Developing with Docker and Webpack

One of the big advantages of using containers in general is that you can keep your environments (whether that’s dev, test, staging, prod or even another developer’s machine) the same, which helps in tracking down those “prod only” bugs. Because of that, I won’t be using webpack-dev-server. I know, I know, but I’m really interested in making sure that my development environment matches the others as closely as possible, and webpack-dev-server is definitely not a production-ready server. So, here’s the plan:

  • Create a small container to act as a web server. I’m going to use nginx for this, but there’s nothing stopping you from using your favourite web server.
  • Create another container that will transpile local source code. The idea here is that we want webpack to watch for changes on our host machine and transpile the JavaScript files, then share them with the container.
  • Create a shared volume for the two containers. This volume will hold the result of the transpilation above and will be available to both the nginx and the webpack containers.

Dependencies

First things first: if you’ve never used Docker before, you’ll need to install the Docker Toolbox. It contains all you need to get started spinning up containers and such. I also suggest you install Kitematic (it’s part of the toolbox). We’ll be setting everything up via the command line, so for the most part we won’t be using Kitematic, but it provides great access to container logs so you don’t have to go hunting for them in the event you run into a problem.

Once the toolbox is installed, open up a Docker Quickstart Terminal. It’s from here we’ll be running all of our commands.
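
If you want a quick sanity check before moving on, both of these commands should succeed from that terminal. The first prints client and server version info; the second lists the VM the toolbox created (usually named default):

docker version
docker-machine ls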

Nginx Container

We’ll start with the nginx container as it’s the simpler of the two. All we’re trying to accomplish here is to set up a simple web server. Straightforward, right? There are prebuilt Nginx example Docker images available, but where’s the fun in that? We’ll put together an image from scratch just to see how everything works. We’ll need a Dockerfile, which describes what your image is going to look like, what it’ll run and the things it’ll do. Here’s my really simple Dockerfile for the nginx container. We’ll call this one `docker.nginx`.

# docker.nginx

FROM nginx

RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf

OK, let’s break this down.

  • FROM nginx – Dockerfiles have a sort-of inheritance built into them. This line indicates that we’re inheriting from the official nginx Docker image, which gives us a Debian-based container with nginx preinstalled. Without this inheritance, Docker would be much more difficult to use, as you’d have to build your images from the ground up. If we built our image with just this statement, it would mirror the official image.
  • RUN mkdir /wwwroot – Creates the directory our transpiled files will be served from; we’ll mount the shared volume here later.
  • COPY nginx.conf /etc/nginx/nginx.conf – This line safely assumes you want to make some modifications to your nginx web server; a minimal sketch appears just below.

Note that we don’t need an instruction to start nginx. The base image already declares a CMD that runs nginx in the foreground when the container starts, and a RUN executes only at build time, so the common RUN service nginx start pattern wouldn’t keep the server running anyway.
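
Here is that minimal nginx.conf sketch. It matches the rest of this setup by listening on port 8080 and serving static files out of /wwwroot, the shared volume we’ll create shortly. The try_files fallback to index.html is an assumption for a single-page app; adjust it to match yours.

# nginx.conf

worker_processes 1;

events {
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;

  server {
    listen 8080;

    # serve the transpiled files from the shared volume
    root /wwwroot;

    location / {
      # assumption: single-page app, so fall back to index.html
      try_files $uri $uri/ /index.html;
    }
  }
}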

Now that we’ve got a Dockerfile ready to go, we’ll need to build the image. We’ll call the image my-nginx to distinguish it from the official nginx Docker image.

docker build -t my-nginx -f docker.nginx .

Here, we’re running the Docker build command with two arguments: -t gives the image a “tag”, which is how we’ll reference the built image later, and -f tells Docker which Dockerfile to use. The . at the end of the command is important: it sets the build context to the current directory, which is the set of files Docker can reach with instructions like COPY.
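
Once the build finishes, the new image should show up in your local image list:

docker images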

So far so good? OK, let’s move on to the more complicated webpack container.

Webpack Container

This container’s Dockerfile is a little more involved than the last. It’s called `docker.webpack`.

# docker.webpack

FROM ubuntu:latest

WORKDIR /app
COPY . /app

RUN apt-get update
RUN apt-get install curl -y
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install nodejs -y
RUN npm install webpack -g
RUN npm install
CMD webpack --watch --watch-poll

Let’s break it down.

  • FROM ubuntu:latest – This indicates that we’re building just a standard Ubuntu container. The official Node Docker container expects us to be running a Node app, which isn’t what we’ll be doing, so we’ll have to install Node manually.
  • WORKDIR /app – The /app directory is where our source code will live. The WORKDIR statement indicates to Docker that every subsequent command should be run from this directory.
  • COPY . /app – Now we’re copying our code from the host machine to the container. The . indicates that we want to copy from the current working host directory, and the /app is the destination folder in the container.
  • RUN apt-get update – Update our apt sources.
  • RUN apt-get install curl -y – We’re going to use curl to download the Node setup script, so it needs to be installed first.
  • RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install nodejs -y – Finally, install Node: this pipes the NodeSource setup script for Node 6 into bash to register the apt repository, then installs the nodejs package (which includes npm).
  • RUN npm install webpack -g – Now, onto the good stuff. Here we’re installing webpack globally with npm, which gives us the webpack command we’ll execute to transpile our code.
  • RUN npm install – Before we do any transpiling, however, we’ll need to make sure all of our dependencies are installed properly.
  • CMD webpack --watch --watch-poll – Now that everything’s ready to go, we can have webpack watch for code changes in our directory and transpile them. Note that we have to use polling here: file-system change events don’t propagate across the network share the container uses to mount our app directory. Also note that you’ll have to make a matching change to your webpack.config.js to allow watching with polling; a minimal sketch follows this list.
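
Since the original link showing the polling configuration is gone, here’s a minimal webpack.config.js sketch. The entry point and bundle filename are placeholders for your app; the parts that matter for this setup are output.path, which writes the bundle into the shared /wwwroot volume, and watchOptions, which switches webpack to polling.

// webpack.config.js (entry and filename are placeholders for your app)

module.exports = {
  entry: './src/index.js',
  output: {
    // emit the bundle into the shared volume so nginx can serve it
    path: '/wwwroot',
    filename: 'bundle.js'
  },
  watchOptions: {
    // wait 300ms after a change before rebuilding
    aggregateTimeout: 300,
    // check for changes every second instead of relying on fs events
    poll: 1000
  }
};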

Now, we want to build our webpack Docker image. We’ll call it my-webpack.

docker build -t my-webpack -f docker.webpack .

If you require more assistance with Dockerfiles, the Docker documentation is excellent.

Running the Docker Images

So now we’ve got two Docker images built: my-nginx and my-webpack. We’re not done, however, as we still need to run containers from those images. We do that with the docker run command. Here are the run commands for the two containers.

docker run --name my-nginx-container -p 8080:8080 -v wwwroot:/wwwroot my-nginx
docker run --name my-webpack-container -p 35729:35729 -v ~/path/to/code:/app -v /app/node_modules -v wwwroot:/wwwroot my-webpack

I’ll explain what’s going on with the nginx container’s run command, as the two are pretty similar. First, --name sets the name of the container once it’s up and running. It’s not required; if you omit it, Docker will generate a readable name for you (you can view running containers with a docker ps command). The -p flag publishes container port 8080 on host port 8080, which is what lets us reach the web server from outside the container. The -v flag tells the run command to mount a volume, in this case the named wwwroot volume, at /wwwroot inside the container; anything written there is visible to every container that mounts the same volume. I’ll dive into volumes a little later. Finally, the last portion of the command specifies the name of the image we built above. The one extra wrinkle in the webpack command is the bare -v /app/node_modules: it creates an anonymous volume over node_modules so the host mount of your source directory doesn’t hide the dependencies that were installed inside the image.

Once you’ve run these commands, you should verify that the containers are running properly with a docker ps. These commands are verbose to say the least, and you’d have to rerun them (together with the build commands) every time you change a Dockerfile. Fortunately, the folks at Docker realized there’s a lot of stuff going on here, so they put together a tool to spin up multiple containers at once.

Docker Compose

Docker Compose allows us to write a configuration file that captures the same options we’ve been passing to docker run on the command line. The file is typically called docker-compose.yml, and the full format is covered in the Docker Compose documentation. For our situation, the file should look like this.

version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: docker.nginx
    image: my-nginx
    container_name: my-nginx-container
    ports:
      - "8080:8080"
    volumes:
      - wwwroot:/wwwroot
  webpack:
    build:
      context: .
      dockerfile: docker.webpack
    image: my-webpack
    container_name: my-webpack-container
    ports:
      - "35729:35729" // for live reload
    volumes:
      - .:/app
      - /app/node_modules
      - wwwroot:/wwwroot
volumes:
  wwwroot:
    driver: local

So what’s going on here? The first line indicates the version of the Docker Compose file format; version 2 adds, among other things, the top-level volumes key we use at the bottom to declare a named volume. The next section is the juicy part, in which we specify the services to spin up or down. Let’s run through the nginx configuration section.

  • build – The build section lets us specify to Docker Compose what to do if we need to build the image. We specify the context and Dockerfile, which match the docker build command line arguments from earlier.
  • image – The name to tag the built image with; it’s also the image the container runs.
  • container_name – The name of the container.
  • ports – This allows us to forward ports directly to the host. Here, we’re forwarding the 8080 port because that’s where the Nginx instance in our container will be listening.
  • volumes – Here, we specify the volumes to mount. There’s just the one, and it’s a shared volume mounted at /wwwroot. Our nginx.conf from right at the start references this directory as the server root, so it’ll serve up files from here.

After the services section, there’s a volumes part, which tells Docker Compose to create a shared volume that will be used for both of our containers. The idea here is that we want to have our webpack container watch for changes in its app root, transpile them, then push them up to the shared wwwroot volume. From there, the nginx container can serve them to anyone who wants them. The Docker documentation talks at length about volumes if you require more clarification.
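
If you want to poke at the shared volume itself, the Docker CLI can list and inspect named volumes, which is handy for confirming the volume exists and seeing where it lives on the VM. Note that Docker Compose prefixes volume names with the project name, so substitute whatever name docker volume ls reports (something like yourproject_wwwroot):

docker volume ls
docker volume inspect wwwroot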

Now that we’ve got our docker-compose.yml file all ready to go, we can build and spin up both of our containers at the same time with just one command.

docker-compose up --build -d

The --build flag indicates that we want to build our images if necessary using the build section under each of our services in the config file. The -d parameter tells Docker Compose to run the containers in the background.
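
A few companion commands are handy day to day: docker-compose ps shows the state of both services, docker-compose logs lets you tail a service’s output (webpack’s watch output in particular), and docker-compose down tears everything back down.

docker-compose ps
docker-compose logs -f webpack
docker-compose down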

Now that our containers are running, you should be able to view your compiled web site at http://192.168.99.100:8080. This is the IP address of the VM that’s been created on your behalf, though it isn’t guaranteed to be the same on every machine. You can determine your VM’s IP address with a docker-machine ip default command.

Conclusion

That’s it! Now you should be able to start coding your JavaScript app, and every time you save a file, webpack will transpile it on the fly in your Docker container and you’ll see the results served from the VM. Happy coding!

Reposted from: https://my.oschina.net/u/2306127/blog/1610006
