Introduction
If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:
- Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
- Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
- Environments are portable, allowing you to package and share your code with others.
This tutorial will show you how to set up a development environment for a Ruby on Rails application using Docker. You will create multiple containers – for the application itself, the PostgreSQL database, Redis, and a Sidekiq service – with Docker Compose. The setup will do the following:
- Synchronize the application code on the host with the code in the container to facilitate changes during development.
- Persist application data between container restarts.
- Configure Sidekiq workers to process jobs as expected.
At the end of this tutorial, you will have a working shark information application running on Docker containers:
Prerequisites
To follow this tutorial, you will need:
- A local development machine or server running Ubuntu 18.04, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
- Docker installed on your local machine or server, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.
- Docker Compose installed on your local machine or server, following Step 1 of How To Install Docker Compose on Ubuntu 18.04.
Step 1 — Cloning the Project and Adding Dependencies
Our first step will be to clone the rails-sidekiq repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Add Sidekiq and Redis to a Ruby on Rails Application, which explains how to add Sidekiq to an existing Rails 5 project.
Clone the repository into a directory called rails-docker:

git clone https://github.com/do-community/rails-sidekiq.git rails-docker
Navigate to the rails-docker directory:

cd rails-docker
In this tutorial we will use PostgreSQL as a database. In order to work with PostgreSQL instead of SQLite 3, you will need to add the pg gem to the project's dependencies, which are listed in its Gemfile. Open that file for editing using nano or your favorite editor:
nano Gemfile
Add the gem anywhere in the main project dependencies (above development dependencies):
. . .
# Reduces boot times through caching; required in config/boot.rb
gem 'bootsnap', '>= 1.1.0', require: false
gem 'sidekiq', '~>6.0.0'
gem 'pg', '~>1.1.3'
group :development, :test do
. . .
We can also comment out the sqlite gem, since we won't be using it anymore:
. . .
# Use sqlite3 as the database for Active Record
# gem 'sqlite3'
. . .
Finally, comment out the spring-watcher-listen gem under development:
. . .
gem 'spring'
# gem 'spring-watcher-listen', '~> 2.0.0'
. . .
If we do not disable this gem, we will see persistent error messages when accessing the Rails console. These error messages derive from the fact that this gem has Rails use listen to watch for changes in development, rather than polling the filesystem for changes. Because this gem watches the root of the project, including the node_modules directory, it will throw error messages about which directories are being watched, cluttering the console. If you are concerned about conserving CPU resources, however, disabling this gem may not work for you. In this case, it may be a good idea to upgrade your Rails application to Rails 6.
Save and close the file when you are finished editing.
With your project repository in place, the pg gem added to your Gemfile, and the spring-watcher-listen gem commented out, you are ready to configure your application to work with PostgreSQL.
Step 2 — Configuring the Application to Work with PostgreSQL and Redis
To work with PostgreSQL and Redis in development, we will want to do the following:
- Configure the application to work with PostgreSQL as the default adapter.
- Add an .env file to the project with our database username and password and Redis host.
- Create an init.sql script to create a sammy user for the database.
- Add an initializer for Sidekiq so that it can work with our containerized redis service.
- Add the .env file and other relevant files to the project's gitignore and dockerignore files.
- Create database seeds so that our application has some records for us to work with when we start it up.
First, open your database configuration file, located at config/database.yml:
nano config/database.yml
Currently, the file includes the following default settings, which are applied in the absence of other settings:
default: &default
adapter: sqlite3
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
timeout: 5000
We need to change these to reflect the fact that we will use the postgresql adapter, since we will be creating a PostgreSQL service with Docker Compose to persist our application data.
Delete the code that sets SQLite as the adapter and replace it with the following settings, which will set the adapter appropriately, along with the other variables necessary to connect:
default: &default
adapter: postgresql
encoding: unicode
database: <%= ENV['DATABASE_NAME'] %>
username: <%= ENV['DATABASE_USER'] %>
password: <%= ENV['DATABASE_PASSWORD'] %>
port: <%= ENV['DATABASE_PORT'] || '5432' %>
host: <%= ENV['DATABASE_HOST'] %>
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
timeout: 5000
. . .
Next, we'll modify the setting for the development environment, since this is the environment we're using in this setup.
Delete the existing SQLite database configuration so that the section looks like this:
. . .
development:
<<: *default
. . .
Finally, delete the database settings for the production and test environments as well:
. . .
test:
<<: *default
production:
<<: *default
. . .
These modifications to our default database settings will allow us to set our database information dynamically using environment variables defined in .env files, which will not be committed to version control.
Save and close the file when you are finished editing.
Note that if you are creating a Rails project from scratch, you can set the adapter with the rails new command, as described in Step 3 of How To Use PostgreSQL with Your Ruby on Rails Application on Ubuntu 18.04. This will set your adapter in config/database.yml and automatically add the pg gem to the project.
Now that we have referenced our environment variables, we can create a file for them with our preferred settings. Extracting configuration settings in this way is part of the 12 Factor approach to application development, which defines best practices for application resiliency in distributed environments. When we set up our production and test environments in the future, configuring our database settings will involve creating additional .env files and referencing the appropriate file in our Docker Compose files.
Open an .env file:
nano .env
Add the following values to the file:
DATABASE_NAME=rails_development
DATABASE_USER=sammy
DATABASE_PASSWORD=shark
DATABASE_HOST=database
REDIS_HOST=redis
In addition to setting our database name, user, and password, we've also set a value for DATABASE_HOST. The value, database, refers to the database PostgreSQL service we will create using Docker Compose. We've also set a REDIS_HOST to specify our redis service.
Save and close the file when you are finished editing.
To create the sammy database user, we can write an init.sql script that we can then mount to the database container when it starts.
Open the script file:
nano init.sql
Add the following code to create a sammy user with administrative privileges:
CREATE USER sammy;
ALTER USER sammy WITH SUPERUSER;
This script will create the appropriate user on the database and grant this user administrative privileges.
Set appropriate permissions on the script:
chmod +x init.sql
Next, we'll configure Sidekiq to work with our containerized redis service. We can add an initializer to the config/initializers directory, where Rails looks for configuration settings once frameworks and plugins are loaded, that sets a value for a Redis host.
Open a sidekiq.rb file to specify these settings:
nano config/initializers/sidekiq.rb
Add the following code to the file to specify values for a REDIS_HOST and REDIS_PORT:
Sidekiq.configure_server do |config|
config.redis = {
host: ENV['REDIS_HOST'],
port: ENV['REDIS_PORT'] || '6379'
}
end
Sidekiq.configure_client do |config|
config.redis = {
host: ENV['REDIS_HOST'],
port: ENV['REDIS_PORT'] || '6379'
}
end
Much like our database configuration settings, these settings give us the ability to set our host and port parameters dynamically, allowing us to substitute the appropriate values at runtime without having to modify the application code itself. In addition to a REDIS_HOST, we have a default value set for REDIS_PORT in case it is not set elsewhere.
Save and close the file when you are finished editing.
Next, to ensure that our application's sensitive data is not copied to version control, we can add .env to our project's .gitignore file, which tells Git which files to ignore in our project. Open the file for editing:
nano .gitignore
At the bottom of the file, add an entry for .env:
yarn-debug.log*
.yarn-integrity
.env
Save and close the file when you are finished editing.
Next, we'll create a .dockerignore file to set what should not be copied to our containers. Open the file for editing:
nano .dockerignore
Add the following code to the file, which tells Docker to ignore some of the things we don’t need copied to our containers:
.DS_Store
.bin
.git
.gitignore
.bundleignore
.bundle
.byebug_history
.rspec
tmp
log
test
config/deploy
public/packs
public/packs-test
node_modules
yarn-error.log
coverage/
Add .env to the bottom of this file as well:
. . .
yarn-error.log
coverage/
.env
Save and close the file when you are finished editing.
As a final step, we will create some seed data so that our application has a few records when we start it up.
Open a file for the seed data in the db directory:
nano db/seeds.rb
Add the following code to the file to create four demo sharks and one sample post:
# Adding demo sharks
sharks = Shark.create([{ name: 'Great White', facts: 'Scary' }, { name: 'Megalodon', facts: 'Ancient' }, { name: 'Hammerhead', facts: 'Hammer-like' }, { name: 'Speartooth', facts: 'Endangered' }])
Post.create(body: 'These sharks are misunderstood', shark: sharks.first)
This seed data will create four sharks and one post that is associated with the first shark.
Save and close the file when you are finished editing.
With your application configured to work with PostgreSQL and your environment variables created, you are ready to write your application Dockerfile.
Step 3 — Writing the Dockerfile and Entrypoint Scripts
Your Dockerfile specifies what will be included in your application container when it is created. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.
Following these guidelines on building optimized containers, we will make our image as efficient as possible by using an Alpine base and attempting to minimize our image layers generally.
Open a Dockerfile in your current directory:
nano Dockerfile
Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application, which will form the starting point of the application build.
Add the following code to the file to add the Ruby alpine image as a base:
FROM ruby:2.5.1-alpine
The alpine image is derived from the Alpine Linux project, and will help us keep our image size down. For more information about whether or not the alpine image is the right choice for your project, please see the full discussion under the Image Variants section of the Docker Hub Ruby image page.
Some factors to consider when using alpine in development:
- Keeping image size down will decrease page and resource load times, particularly if you also keep volumes to a minimum. This helps keep your user experience in development quick and closer to what it would be if you were working locally in a non-containerized environment.
- Having parity between development and production images facilitates successful deployments. Since teams often opt to use Alpine images in production for speed benefits, developing with an Alpine base helps offset issues when moving to production.
Next, set an environment variable to specify the Bundler version:
. . .
ENV BUNDLER_VERSION=2.0.2
This is one of the steps we will take to avoid version conflicts between the default bundler version available in our environment and our application code, which requires Bundler 2.0.2.
Next, add the packages that you need to work with the application to the Dockerfile:
. . .
RUN apk add --update --no-cache \
binutils-gold \
build-base \
curl \
file \
g++ \
gcc \
git \
less \
libstdc++ \
libffi-dev \
libc-dev \
linux-headers \
libxml2-dev \
libxslt-dev \
libgcrypt-dev \
make \
netcat-openbsd \
nodejs \
openssl \
pkgconfig \
postgresql-dev \
python \
tzdata \
yarn
These packages include nodejs and yarn, among others. Since our application serves assets with webpack, we need to include Node.js and Yarn for the application to work as expected.
Keep in mind that the alpine image is extremely minimal: the packages listed here are not exhaustive of what you might want or need in development when you are containerizing your own application.
Next, install the appropriate bundler version:
. . .
RUN gem install bundler -v 2.0.2
This step will guarantee parity between our containerized environment and the specifications in this project's Gemfile.lock file.
Now set the working directory for the application on the container:
. . .
WORKDIR /app
Copy over your Gemfile and Gemfile.lock:
. . .
COPY Gemfile Gemfile.lock ./
Copying these files as an independent step, followed by bundle install, means that the project gems do not need to be rebuilt every time you make changes to your application code. This will work in conjunction with the gem volume that we will include in our Compose file, which will mount gems to your application container in cases where the service is recreated but project gems remain the same.
Next, set the configuration options for the nokogiri gem build:
. . .
RUN bundle config build.nokogiri --use-system-libraries
. . .
This step builds nokogiri with the libxml2 and libxslt library versions that we added to the application container in the RUN apk add… step above.
Next, install the project gems:
. . .
RUN bundle check || bundle install
This instruction checks whether the gems are already installed, and runs bundle install only if they are not.
Next, we'll repeat the same procedure we used with our gems for our JavaScript packages and dependencies. First we'll copy package metadata, then we'll install dependencies, and finally we'll copy the application code into the container image.
To get started with the JavaScript section of our Dockerfile, copy package.json and yarn.lock from your current project directory on the host to the container:
. . .
COPY package.json yarn.lock ./
Then install the required packages with yarn install:
. . .
RUN yarn install --check-files
This instruction includes a --check-files flag with the yarn command, a feature that makes sure any previously installed files have not been removed. As in the case of our gems, we will manage the persistence of the packages in the node_modules directory with a volume when we write our Compose file.
Finally, copy over the rest of the application code and start the application with an entrypoint script:
. . .
COPY . ./
ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
Using an entrypoint script allows us to run the container as an executable.
The final Dockerfile will look like this:
FROM ruby:2.5.1-alpine
ENV BUNDLER_VERSION=2.0.2
RUN apk add --update --no-cache \
binutils-gold \
build-base \
curl \
file \
g++ \
gcc \
git \
less \
libstdc++ \
libffi-dev \
libc-dev \
linux-headers \
libxml2-dev \
libxslt-dev \
libgcrypt-dev \
make \
netcat-openbsd \
nodejs \
openssl \
pkgconfig \
postgresql-dev \
python \
tzdata \
yarn
RUN gem install bundler -v 2.0.2
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config build.nokogiri --use-system-libraries
RUN bundle check || bundle install
COPY package.json yarn.lock ./
RUN yarn install --check-files
COPY . ./
ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
Save and close the file when you are finished editing.
Next, create a directory called entrypoints for the entrypoint scripts:
mkdir entrypoints
This directory will include our main entrypoint script and a script for our Sidekiq service.
Open the file for the application entrypoint script:
nano entrypoints/docker-entrypoint.sh
Add the following code to the file:
#!/bin/sh
set -e
if [ -f tmp/pids/server.pid ]; then
rm tmp/pids/server.pid
fi
bundle exec rails s -b 0.0.0.0
The first important line is set -e, which tells the /bin/sh shell that runs the script to fail fast if there are any problems later in the script. Next, the script checks that tmp/pids/server.pid is not present to ensure that there won't be server conflicts when we start the application. Finally, the script starts the Rails server with the bundle exec rails s command. We use the -b option with this command to bind the server to all IP addresses rather than to the default, localhost. This invocation makes the Rails server route incoming requests to the container IP rather than to the default localhost.
Save and close the file when you are finished editing.
Make the script executable:
chmod +x entrypoints/docker-entrypoint.sh
Next, we will create a script to start our sidekiq service, which will process our Sidekiq jobs. For more information about how this application uses Sidekiq, please see How To Add Sidekiq and Redis to a Ruby on Rails Application.
Open a file for the Sidekiq entrypoint script:
nano entrypoints/sidekiq-entrypoint.sh
Add the following code to the file to start Sidekiq:
#!/bin/sh
set -e
if [ -f tmp/pids/server.pid ]; then
rm tmp/pids/server.pid
fi
bundle exec sidekiq
This script starts Sidekiq in the context of our application bundle.
Save and close the file when you are finished editing. Make it executable:
chmod +x entrypoints/sidekiq-entrypoint.sh
With your entrypoint scripts and Dockerfile in place, you are ready to define your services in your Compose file.
Step 4 — Defining Services with Docker Compose
Using Docker Compose, we will be able to run the multiple containers required for our setup. We will define our Compose services in our main docker-compose.yml file. A service in Compose is a running container, and service definitions, which you will include in your docker-compose.yml file, contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.
Our application setup will include the following services:
- The application itself
- The PostgreSQL database
- Redis
- Sidekiq
We will also include a bind mount as part of our setup, so that any code changes we make during development will be immediately synchronized with the containers that need access to this code.
Note that we are not defining a test service, since testing is outside of the scope of this tutorial and series, but you could do so by following the precedent we are using here for the sidekiq service.
Open the docker-compose.yml file:
nano docker-compose.yml
First, add the application service definition:
version: '3.4'
services:
app:
build:
context: .
dockerfile: Dockerfile
depends_on:
- database
- redis
ports:
- "3000:3000"
volumes:
- .:/app
- gem_cache:/usr/local/bundle/gems
- node_modules:/app/node_modules
env_file: .env
environment:
RAILS_ENV: development
The app service definition includes the following options:

- build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
- context: This defines the build context for the image build, in this case the current project directory.
- dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image.
- depends_on: This sets up the database and redis containers first so that they are up and running before app.
- ports: This maps port 3000 on the host to port 3000 on the container.
- volumes: We are including two types of mounts here:
  - The first is a bind mount that mounts our application code on the host to the /app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
  - The second is a named volume, gem_cache. When the bundle install instruction runs in the container, it will install the project gems. Adding this volume means that if you recreate the container, the gems will be mounted to the new container. This mount presumes that there haven't been any changes to the project, so if you do make changes to your project gems in development, you will need to remember to delete this volume before recreating your application service.
  - The third volume is a named volume for the node_modules directory. Rather than having node_modules mounted to the host, which can lead to package discrepancies and permissions conflicts in development, this volume will ensure that the packages in this directory are persisted and reflect the current state of the project. Again, if you modify the project's Node dependencies, you will need to remove and recreate this volume.
- env_file: This tells Compose that we would like to add environment variables from a file called .env located in the build context.
- environment: Using this option allows us to set a non-sensitive environment variable, passing information about the Rails environment to the container.
Next, below the app service definition, add the following code to define your database service:
. . .
database:
image: postgres:12.1
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
Unlike the app service, the database service pulls a postgres image directly from Docker Hub. Note that we're also pinning the version here, rather than setting it to latest or not specifying it (which defaults to latest). This way, we can ensure that this setup works with the versions specified here and avoid unexpected surprises from breaking changes to the image.
We are also including a db_data volume here, which will persist our application data in between container starts. Additionally, we've mounted our init.sql startup script to the appropriate directory, docker-entrypoint-initdb.d/, on the container, in order to create our sammy database user. After the image entrypoint creates the default postgres user and database, it will run any scripts found in the docker-entrypoint-initdb.d/ directory, which you can use for necessary initialization tasks. For more details, look at the Initialization scripts section of the PostgreSQL image documentation.
Next, add the redis service definition:
. . .
redis:
image: redis:5.0.7
Like the database service, the redis service uses an image from Docker Hub. In this case, we are not persisting the Sidekiq job cache.
Finally, add the sidekiq service definition:
. . .
  sidekiq:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - app
      - database
      - redis
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/app/node_modules
    env_file: .env
    environment:
      RAILS_ENV: development
    entrypoint: ./entrypoints/sidekiq-entrypoint.sh
Our `sidekiq` service resembles our `app` service in a few respects: it uses the same build context and image, environment variables, and volumes. However, it depends on the `app`, `redis`, and `database` services, and so will be the last to start. Additionally, it uses an `entrypoint` that will override the entrypoint set in the Dockerfile. This `entrypoint` setting points to `entrypoints/sidekiq-entrypoint.sh`, which includes the appropriate command to start the `sidekiq` service.
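The script at `entrypoints/sidekiq-entrypoint.sh` is not reproduced in this section. As a hypothetical sketch of the shape such a script typically takes (your actual script may differ):

```shell
#!/bin/sh
# Hypothetical sketch of entrypoints/sidekiq-entrypoint.sh; the actual
# script in your project may differ.
set -e

# exec replaces the shell with Sidekiq so the worker receives container
# signals (e.g. SIGTERM from docker-compose down) directly.
exec bundle exec sidekiq
```

Because the script runs inside the container, it relies on the gems installed there via the `gem_cache` volume.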
As a final step, add the volume definitions below the `sidekiq` service definition:
. . .
volumes:
  gem_cache:
  db_data:
  node_modules:
Our top-level `volumes` key defines the volumes `gem_cache`, `db_data`, and `node_modules`. When Docker creates volumes, the contents of each volume are stored in a part of the host filesystem, `/var/lib/docker/volumes/`, that's managed by Docker: each volume gets a directory under that path, which is mounted to any container that uses the volume. In this way, the shark information data that our users create will persist in the `db_data` volume even if we remove and recreate the `database` service.
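If you want to confirm where a named volume lives on the host, you can ask Docker directly. Assuming Docker is running and the Compose project is named `rails-docker` (which determines the volume name prefix):

```shell
# List volumes, including those created by Compose for this project:
docker volume ls

# Show metadata for the database volume, including its Mountpoint
# under /var/lib/docker/volumes/:
docker volume inspect rails-docker_db_data
```

These commands require a running Docker daemon, and the volumes will only exist after you first bring the services up.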
The finished file will look like this:
version: '3.4'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/app/node_modules
    env_file: .env
    environment:
      RAILS_ENV: development

  database:
    image: postgres:12.1
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

  redis:
    image: redis:5.0.7

  sidekiq:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - app
      - database
      - redis
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
      - node_modules:/app/node_modules
    env_file: .env
    environment:
      RAILS_ENV: development
    entrypoint: ./entrypoints/sidekiq-entrypoint.sh

volumes:
  gem_cache:
  db_data:
  node_modules:
Save and close the file when you are finished editing.
With your service definitions written, you are ready to start the application.
Step 5 — Testing the Application
With your `docker-compose.yml` file in place, you can create your services with the `docker-compose up` command and seed your database. You can also test that your data will persist by stopping and removing your containers with `docker-compose down` and recreating them.
First, build the container images and create the services by running `docker-compose up` with the `-d` flag, which will run the containers in the background:
- docker-compose up -d
You will see output that your services have been created:
Output
Creating rails-docker_database_1 ... done
Creating rails-docker_redis_1 ... done
Creating rails-docker_app_1 ... done
Creating rails-docker_sidekiq_1 ... done
You can also get more detailed information about the startup processes by displaying the log output from the services:
- docker-compose logs
You will see something like this if everything has started correctly:
Output
sidekiq_1 | 2019-12-19T15:05:26.365Z pid=6 tid=grk7r6xly INFO: Booting Sidekiq 6.0.3 with redis options {:host=>"redis", :port=>"6379", :id=>"Sidekiq-server-PID-6", :url=>nil}
sidekiq_1 | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: Running in ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux-musl]
sidekiq_1 | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: See LICENSE and the LGPL-3.0 for licensing details.
sidekiq_1 | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
app_1 | => Booting Puma
app_1 | => Rails 5.2.3 application starting in development
app_1 | => Run `rails server -h` for more startup options
app_1 | Puma starting in single mode...
app_1 | * Version 3.12.1 (ruby 2.5.1-p57), codename: Llamas in Pajamas
app_1 | * Min threads: 5, max threads: 5
app_1 | * Environment: development
app_1 | * Listening on tcp://0.0.0.0:3000
app_1 | Use Ctrl-C to stop
. . .
database_1 | PostgreSQL init process complete; ready for start up.
database_1 |
database_1 | 2019-12-19 15:05:20.160 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
database_1 | 2019-12-19 15:05:20.160 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
database_1 | 2019-12-19 15:05:20.160 UTC [1] LOG: listening on IPv6 address "::", port 5432
database_1 | 2019-12-19 15:05:20.163 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
database_1 | 2019-12-19 15:05:20.182 UTC [63] LOG: database system was shut down at 2019-12-19 15:05:20 UTC
database_1 | 2019-12-19 15:05:20.187 UTC [1] LOG: database system is ready to accept connections
. . .
redis_1 | 1:M 19 Dec 2019 15:05:18.822 * Ready to accept connections
You can also check the status of your containers with `docker-compose ps`:
- docker-compose ps
You will see output indicating that your containers are running:
Output
Name Command State Ports
-----------------------------------------------------------------------------------------
rails-docker_app_1 ./entrypoints/docker-resta ... Up 0.0.0.0:3000->3000/tcp
rails-docker_database_1 docker-entrypoint.sh postgres Up 5432/tcp
rails-docker_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
rails-docker_sidekiq_1 ./entrypoints/sidekiq-entr ... Up
Next, create and seed your database and run migrations on it with the following `docker-compose exec` command:
- docker-compose exec app bundle exec rake db:setup db:migrate
The `docker-compose exec` command allows you to run commands in your services; we are using it here to run `rake db:setup` and `db:migrate` in the context of our application bundle to create and seed the database and run migrations. As you work in development, `docker-compose exec` will prove useful whenever you want to run migrations against your development database.
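Other one-off tasks can be run against the containers the same way. For example, to check which migrations have been applied (assuming the services are up):

```shell
# Runs inside the app container, using the container's gems:
docker-compose exec app bundle exec rake db:migrate:status
```

This pattern works for any Rake task or shell command your development workflow needs.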
You will see the following output after running this command:
Output
Created database 'rails_development'
Database 'rails_development' already exists
-- enable_extension("plpgsql")
-> 0.0140s
-- create_table("endangereds", {:force=>:cascade})
-> 0.0097s
-- create_table("posts", {:force=>:cascade})
-> 0.0108s
-- create_table("sharks", {:force=>:cascade})
-> 0.0050s
-- enable_extension("plpgsql")
-> 0.0173s
-- create_table("endangereds", {:force=>:cascade})
-> 0.0088s
-- create_table("posts", {:force=>:cascade})
-> 0.0128s
-- create_table("sharks", {:force=>:cascade})
-> 0.0072s
With your services running, you can visit `localhost:3000` or `http://your_server_ip:3000` in the browser. You will see a landing page that looks like this:
We can now test data persistence. Create a new shark by clicking on the Get Shark Info button, which will take you to the `sharks/index` route:
To verify that the application is working, we can add some demo information to it. Click on New Shark. You will be prompted for a username (sammy) and password (shark), thanks to the project’s authentication settings.
On the New Shark page, input “Mako” into the Name field and “Fast” into the Facts field.
Click on the Create Shark button to create the shark. Once you have created the shark, click Home on the site’s navbar to get back to the main application landing page. We can now test that Sidekiq is working.
Click on the Which Sharks Are in Danger? button. Since you have not uploaded any endangered sharks, this will take you to the `endangered` `index` view:
Click on Import Endangered Sharks to import the sharks. You will see a status message telling you that the sharks have been imported:
You will also see the beginning of the import. Refresh your page to see the entire table:
Thanks to Sidekiq, our large batch upload of endangered sharks has succeeded without locking up the browser or interfering with other application functionality.
Click on the Home button at the bottom of the page, which will bring you back to the application main page:
From here, click on Which Sharks Are in Danger? again. You will see the uploaded sharks once again.
Now that we know our application is working properly, we can test our data persistence.
Back at your terminal, type the following command to stop and remove your containers:
- docker-compose down
Note that we are not including the `--volumes` option; hence, our `db_data` volume is not removed.
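Conversely, if you ever want a completely clean slate, including the saved shark data, you could add the `--volumes` flag (destructive: do not run this if you want to keep your data):

```shell
# WARNING: --volumes also removes the named volumes (db_data, gem_cache,
# node_modules), permanently deleting the database contents.
docker-compose down --volumes
```

After running this, the next `docker-compose up` starts from an empty database, re-running `init.sql`.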
The following output confirms that your containers and network have been removed:
Output
Stopping rails-docker_sidekiq_1 ... done
Stopping rails-docker_app_1 ... done
Stopping rails-docker_database_1 ... done
Stopping rails-docker_redis_1 ... done
Removing rails-docker_sidekiq_1 ... done
Removing rails-docker_app_1 ... done
Removing rails-docker_database_1 ... done
Removing rails-docker_redis_1 ... done
Removing network rails-docker_default
Recreate the containers:
- docker-compose up -d
Open the Rails console on the `app` container with `docker-compose exec` and `bundle exec rails console`:
- docker-compose exec app bundle exec rails console
At the prompt, inspect the `last` Shark record in the database:
- Shark.last.inspect
You will see the record you just created:
IRB session
Shark Load (1.0ms) SELECT "sharks".* FROM "sharks" ORDER BY "sharks"."id" DESC LIMIT $1 [["LIMIT", 1]]
=> "#<Shark id: 5, name: \"Mako\", facts: \"Fast\", created_at: \"2019-12-20 14:03:28\", updated_at: \"2019-12-20 14:03:28\">"
You can then check to see that your `Endangered` sharks have been persisted with the following command:
- Endangered.all.count
IRB session
(0.8ms) SELECT COUNT(*) FROM "endangereds"
=> 73
Your `db_data` volume was successfully mounted to the recreated `database` service, making it possible for your `app` service to access the saved data. If you navigate directly to the shark `index` page by visiting `localhost:3000/sharks` or `http://your_server_ip:3000/sharks`, you will also see that record displayed:
Your endangered sharks will also be at the `localhost:3000/endangered/data` or `http://your_server_ip:3000/endangered/data` view:
Your application is now running on Docker containers with data persistence and code synchronization enabled. You can go ahead and test out local code changes on your host, which will be synchronized to your container thanks to the bind mount we defined as part of the `app` service.
Conclusion
By following this tutorial, you have created a development setup for your Rails application using Docker containers. You've made your project more modular and portable by extracting sensitive information and decoupling your application's state from your code. You have also configured a boilerplate `docker-compose.yml` file that you can revise as your development needs and requirements change.
As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics. Or, if you would like to invest in a Kubernetes learning sequence, please have a look at our Kubernetes for Full-Stack Developers curriculum.
To learn more about the application code itself, please see the other tutorials in this series:
How To Create Nested Resources for a Ruby on Rails Application