The Ins and Outs of Computer Networking

Disclaimer

This article is based primarily on the Google IT Support certification. In translating the original material, I condensed it and added my own understanding and firsthand experience.

The TCP/IP Five-Layer Network Model

Original:

To really understand networking, we need to understand all of the components involved. We're talking about everything from the cables that connect devices to each other to the protocols that these devices use to communicate. There are a bunch of models that help explain how network devices communicate, but in this course, we will focus on a five-layer model. By the end of this lesson, you'll be able to identify and describe each layer and what purpose it serves. Let's start at the bottom of our stack, where we have what's known as the physical layer. The physical layer is a lot like what it sounds like. It represents the physical devices that interconnect computers. This includes the specifications for the networking cables and the connectors that join devices together, along with specifications describing how signals are sent over these connections. The second layer in our model is known as the data link layer. Some sources will call this layer the network interface or the network access layer. At this layer, we introduce our first protocols. While the physical layer is all about cabling, connectors and sending signals, the data link layer is responsible for defining a common way of interpreting these signals so network devices can communicate. Lots of protocols exist at the data link layer, but the most common is known as Ethernet, although wireless technologies are becoming more and more popular. Beyond specifying physical layer attributes, the Ethernet standards also define a protocol responsible for getting data to nodes on the same network or link. The third layer, the network layer, is also sometimes called the Internet layer. It's this layer that allows different networks to communicate with each other through devices known as routers. A collection of networks connected together through routers is an internetwork, the most famous of these being the Internet. Hopefully you've heard of it.
While the data link layer is responsible for getting data across a single link, the network layer is responsible for getting data delivered across a collection of networks. Think of when a device on your home network connects with a server on the Internet. It's the network layer that helps get the data between these two locations. The most common protocol used at this layer is known as IP, or Internet Protocol. IP is the heart of the Internet and most small networks around the world. Network software is usually divided into client and server categories, with the client application initiating a request for data and the server software answering the request across the network. A single node may be running multiple client or server applications. So, you might run an email program and a web browser, both client applications, on your PC at the same time, and your email and web server might both run on the same server. Even so, emails end up in your email application and web pages end up in your web browser. That's because of our next layer, the transport layer. While the network layer delivers data between two individual nodes, the transport layer sorts out which client and server programs are supposed to get that data. When you heard about our network layer protocol IP, you may have thought of TCP/IP, which is a pretty common phrase. That's because the protocol most commonly used in the fourth layer, the transport layer, is known as TCP, or Transmission Control Protocol. While often said together as the phrase TCP/IP, to fully understand and troubleshoot networking issues, it's important to know that they're entirely different protocols serving different purposes. Other transport protocols also use IP to get around, including a protocol known as UDP, or User Datagram Protocol. The big difference between the two is that TCP provides mechanisms to ensure that data is reliably delivered while UDP does not.
Spoiler alert: we will cover the differences between the TCP and UDP transport protocols in more detail later. For now, it's important to know that the network layer, in our case IP, is responsible for getting data from one node to another. Also, remember that the transport layer, mostly TCP and UDP, is responsible for ensuring that data gets to the right applications running on those nodes. Last but not least, the fifth layer is known as the application layer. There are lots of different protocols at this layer, and as you might have guessed from the name, they are application-specific. Protocols used to allow you to browse the web or send and receive email are some common ones. The protocols at play in the application layer will be the most familiar to you, since they're the ones you've probably interacted with directly before, even if you didn't realize it. You can think of the layers like different aspects of a package being delivered. The physical layer is the delivery truck and the roads. The data link layer is how the delivery trucks get from one intersection to the next over and over. The network layer identifies which roads need to be taken to get from address A to address B. The transport layer ensures that the delivery driver knows how to knock on your door to tell you your package has arrived. And the application layer is the contents of the package itself.
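The five layers and the example protocols the lesson names at each one can be summed up in a short sketch (Python here is only for illustration; the layer names and examples come straight from the text above):

```python
# The five-layer model described above, from the bottom of the stack (1)
# to the top (5), with the example technologies the lesson mentions.
FIVE_LAYER_MODEL = {
    1: ("Physical",    "cables, connectors, signaling"),
    2: ("Data Link",   "Ethernet, wireless"),
    3: ("Network",     "IP"),
    4: ("Transport",   "TCP, UDP"),
    5: ("Application", "HTTP, SMTP"),
}

def describe(layer_number):
    """One-line summary of a layer, e.g. describe(3)."""
    name, examples = FIVE_LAYER_MODEL[layer_number]
    return f"Layer {layer_number}: {name} ({examples})"

for number in sorted(FIVE_LAYER_MODEL):
    print(describe(number))
```

Reading the table bottom to top mirrors how a received frame is processed; top to bottom mirrors sending.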


The Basics of Networking Devices

Original

Lots of different cables and network devices can be used to allow computers to properly communicate with each other. By the end of this lesson, you'll be able to identify and describe various networking cables and networking devices. Computer networking is a huge part of the day-to-day role of many IT specialists. Knowing how to differentiate different network devices will be essential to your success. Let's start with the most basic component of a wired network. Cables: cables are what connect different devices to each other, allowing data to be transmitted over them. Most network cables used today can be split into two categories: copper and fiber. Copper cables are the most common form of networking cable. They're made up of multiple pairs of copper wires inside a plastic insulator. You may already know that computers communicate in binary, which people represent with ones and zeros. The sending device communicates binary data across these copper wires by changing the voltage between two ranges. The system at the receiving end is able to interpret these voltage changes as binary ones and zeros, which can then be translated into different forms of data. The most common forms of copper twisted-pair cables used in networking are Cat 5, Cat 5e, and Cat 6 cables. These are all shorthand ways of saying category 5 or category 6 cables. These categories have different physical characteristics, like the number of twists in the pairs of copper wires, which result in different usable lengths and transfer rates. Cat 5 is older and has been mostly replaced by Cat 5e and Cat 6 cables. From the outside, they all look about the same, and even internally, they're very similar to the naked eye. The important thing to know is that differences in how the twisted pairs are arranged inside these cables can drastically alter how quickly data can be sent across them and how resistant these signals are to outside interference.
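The idea of a receiver interpreting voltage changes as ones and zeros can be sketched in a few lines. This is a toy model only: the 1.0 V threshold and the sample values are invented for illustration, and real line coding (like Ethernet's) is far more involved.

```python
# Toy model of the idea above: a receiver samples the wire's voltage and
# maps each sample to a one or a zero by comparing it to a threshold.
THRESHOLD_V = 1.0  # invented threshold between the "low" and "high" ranges

def voltages_to_bits(samples):
    """Interpret each voltage sample as a binary 1 (high) or 0 (low)."""
    return [1 if volts > THRESHOLD_V else 0 for volts in samples]

def bits_to_byte(bits):
    """Collapse eight bits, most significant first, into one number."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

samples = [0.2, 1.8, 1.9, 0.1, 0.3, 1.7, 0.2, 1.8]   # eight samples -> one byte
bits = voltages_to_bits(samples)
print(bits)                      # [0, 1, 1, 0, 0, 1, 0, 1]
print(bits_to_byte(bits))        # 101 -- one byte of received data
```

Crosstalk, discussed next, is what happens when such a pulse on one wire shows up as a spurious sample on a neighboring wire.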
Cat 5e cables have mostly replaced those older Cat 5 cables because their internals reduce crosstalk. Crosstalk is when an electrical pulse on one wire is accidentally detected on another wire. The receiving end isn't able to understand the data, causing a network error. Higher-level protocols have methods for detecting missing data and asking for the data a second time, but of course this takes up more time. The higher quality specifications of a Cat 5e cable make it less likely that data needs to be retransmitted. That means on average, you can expect more data to be transferred in the same amount of time. Cat 6 cables follow an even stricter specification to avoid crosstalk, which makes those cables more expensive. Cat 6 cables can transfer data faster and more reliably than Cat 5e cables can, but because of their internal arrangement, they have a shorter maximum distance when used at higher speeds. The second primary form of networking cable is known as fiber, short for fiber-optic cables. Fiber cables contain individual optical fibers, which are tiny tubes made out of glass about the width of a human hair. These tubes of glass can transport beams of light. Unlike copper, which uses electrical voltages, fiber cables use pulses of light to represent the ones and zeros of the underlying data. Fiber is even sometimes used specifically in environments where there's a lot of electromagnetic interference from outside sources, because this can impact data being sent across copper wires. Fiber cables can generally transport data quicker than copper cables can, but they're much more expensive and fragile. Fiber can also transport data over much longer distances than copper can without suffering potential data loss. Now you know a lot more about the pros and cons of fiber cables, but keep in mind, you'll be way more likely to run into fiber cables in computer data centers than you would in an office or at home.


Original

Hubs and switches are the primary devices used to connect computers on a single network, usually referred to as a LAN, or local area network. But we often want to send or receive data to computers on other networks; this is where routers come into play. A router is a device that knows how to forward data between independent networks. While a hub is a layer 1 device and a switch is a layer 2 device, a router operates at layer 3, the network layer. Just like a switch can inspect Ethernet data to determine where to send things, a router can inspect IP data to determine where to send things. Routers store internal tables containing information about how to route traffic between lots of different networks all over the world. The most common type of router you'll see is one for a home network or a small office. These devices generally don't have very detailed routing tables. The purpose of these routers is mainly just to take traffic originating from inside the home or office LAN and to forward it along to the ISP, or Internet service provider. Once traffic is at the ISP, a way more sophisticated type of router takes over. These core routers form the backbone of the Internet, and are directly responsible for how we send and receive data all over the Internet every single day. Core ISP routers don't just handle a lot more traffic than a home or small office router; they also have to deal with much more complexity in making decisions about where to send traffic. A core router usually has many different connections to many other routers. Routers share data with each other via a protocol known as BGP, or Border Gateway Protocol, that lets them learn about the most optimal paths to forward traffic. When you open a web browser and load a web page, the traffic between your computer and the web server could have traveled over dozens of different routers. The Internet is incredibly large and complicated, and routers are global guides for getting traffic to the right places.
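A router's forwarding decision can be sketched as a table lookup. The sketch below is illustrative only: the prefixes and next-hop names are invented, and real routers use far more efficient longest-prefix-match data structures than a linear scan.

```python
import ipaddress

# A toy routing table: destination prefix -> next hop.
# All prefixes and next-hop names here are invented for illustration.
ROUTES = {
    "0.0.0.0/0":   "isp-gateway",   # default route: hand everything else to the ISP
    "10.0.0.0/8":  "router-a",
    "10.1.0.0/16": "router-b",
}

def next_hop(dest_ip):
    """Pick the most specific (longest) matching prefix, as a router would."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [
        (network.prefixlen, hop)
        for network, hop in ((ipaddress.ip_network(p), h) for p, h in ROUTES.items())
        if ip in network
    ]
    return max(matches)[1]           # longest prefix wins

print(next_hop("10.1.2.3"))          # router-b (the /16 beats the /8)
print(next_hop("8.8.8.8"))           # isp-gateway (only the default route matches)
```

A home router's table really is about this small; the "detailed routing tables" on core routers hold hundreds of thousands of such prefixes, kept up to date via BGP.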


Ethernet and MAC Addresses

Original

Wireless and cellular internet access are quickly becoming some of the most common ways to connect computing devices to networks, and it’s probably how you’re connected right now. So you might be surprised to hear that traditional cable networks are still the most common option you find in the workplace and definitely in the data center. The protocol most widely used to send data across individual links is known as Ethernet. Ethernet and the data link layer provide a means for software at higher levels of the stack to send and receive data. One of the primary purposes of this layer is to essentially abstract away the need for any other layers to care about the physical layer and what hardware is in use. By dumping this responsibility on the data link layer, the Internet, transport and application layers can all operate the same no matter how the device they’re running on is connected. So, for example, your web browser doesn’t need to know if it’s running on a device connected via a twisted pair or a wireless connection. It just needs the underlying layers to send and receive data for it. By the end of this lesson, you’ll be able to explain what MAC addresses are and how they’re used to identify computers. You’ll also know how to describe the various components that make up an Ethernet frame. And you’ll be able to differentiate between unicast, multicast and broadcast addresses. Lastly, you’ll be able to explain how cyclical redundancy checks help ensure the integrity of data sent via Ethernet. Understanding these concepts will help you troubleshoot a variety of problems as an IT support specialist. Warning: a history lesson on old-school technology is headed your way. Here it goes. Ethernet is a fairly old technology. It first came into being in 1980 and saw its first fully polished standardization in 1983. Since then, a few changes have been introduced primarily in order to support ever-increasing bandwidth needs. 
For the most part though, the Ethernet in use today is comparable to the Ethernet standards as first published all those years ago. In 1983, computer networking was totally different than it is today. One of the notable differences in LAN topology was that the switch, or switchable hub, hadn't been invented yet. This meant that frequently, many or all devices on a network shared a single collision domain. You might remember from our discussion about hubs and switches that a collision domain is a network segment where only one device can speak at a time. This is because all data in a collision domain is sent to all the nodes connected to it. If two computers were to send data across the wire at the same time, this would result in literal collisions of the electrical current representing our ones and zeros, leaving the end result unintelligible. Ethernet, as a protocol, solved this problem by using a technique known as carrier sense multiple access with collision detection. Doesn't exactly roll off the tongue. We generally abbreviate this to CSMA/CD. CSMA/CD is used to determine when the communications channels are clear and when a device is free to transmit data. The way CSMA/CD works is actually pretty simple. If there's no data currently being transmitted on the network segment, a node will feel free to send data. If it turns out that two or more computers end up trying to send data at the same time, the computers detect this collision and stop sending data. Each device involved with the collision then waits a random interval of time before trying to send data again. This random interval helps to prevent all the computers involved in the collision from colliding again the next time they try to transmit anything. When a network segment is a collision domain, it means that all devices on that segment receive all communication across the entire segment. This means we need a way to identify which node the transmission was actually meant for.
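The CSMA/CD behavior just described, listen, transmit, detect a collision, then wait a random interval, can be sketched as a loop. This is a conceptual toy, not how a real NIC is implemented; `channel_busy` and `detect_collision` stand in for the hardware's carrier sense and collision detection.

```python
import random

def csma_cd_send(channel_busy, detect_collision, max_attempts=16):
    """Toy CSMA/CD loop: wait for a quiet channel, back off randomly on collision.

    channel_busy() and detect_collision() are placeholders for hardware
    behavior; both are assumptions of this sketch.
    """
    for attempt in range(1, max_attempts + 1):
        if channel_busy():
            continue                  # carrier sense: someone else is talking
        if not detect_collision():
            return attempt            # frame went out cleanly
        # Collision: wait a random number of slot times before retrying.
        # Real Ethernet uses binary exponential backoff, 0 .. 2**n - 1 slots,
        # capping the exponent at 10.
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        # On real hardware a delay of (slots * slot_time) would happen here.
    raise RuntimeError("gave up after too many collisions")

# Simulate one collision, then a clean transmission on the second try:
events = iter([True, False])          # collision on attempt 1 only
print(csma_cd_send(lambda: False, lambda: next(events)))   # 2
```

The random backoff is the key trick: two colliding nodes almost never pick the same waiting interval, so they don't collide again in lockstep.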
This is where something known as a media access control address, or MAC address, comes into play. A MAC address is a globally unique identifier attached to an individual network interface. It's a 48-bit number normally represented by six groupings of two hexadecimal digits. Just like how binary is a way to represent numbers with only two digits, hexadecimal is a way to represent numbers using 16 digits. Since we don't have numerals to represent any individual digit larger than nine, hexadecimal numbers employ the letters A, B, C, D, E, and F to represent the numbers 10, 11, 12, 13, 14, and 15. Another way to reference each group of numbers in a MAC address is an octet. An octet, in computer networking, is any number that can be represented by 8 bits. In this case, two hexadecimal digits can represent the same numbers that 8 bits can. Now, you may have noticed that we mentioned that MAC addresses are globally unique, which might have left you wondering how that could possibly be. The short answer is that a 48-bit number is much larger than you might expect. The total number of possible MAC addresses that could exist is 2 to the power of 48, or 281,474,976,710,656 unique possibilities. That's a whole lot of possibilities. A MAC address is split into two sections. The first three octets of a MAC address are known as the organizationally unique identifier, or OUI. These are assigned to individual hardware manufacturers by the IEEE, or the Institute of Electrical and Electronics Engineers. This is a useful bit of information to keep in your back pocket, because it means that you can always identify the manufacturer of a network interface purely by its MAC address. The last three octets of a MAC address can be assigned in any way that the manufacturer would like, with the condition that they only assign each possible address once to keep all MAC addresses globally unique.
Ethernet uses MAC addresses to ensure that the data it sends has both an address for the machine that sent the transmission, as well as the one that the transmission was intended for. In this way, even on a network segment acting as a single collision domain, each node on that network knows when traffic is intended for it.
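Because the first three octets are the OUI, splitting a MAC address into its manufacturer and device halves is simple string work. A small sketch (the address below is made up purely for illustration):

```python
def split_mac(mac):
    """Split a MAC address into its OUI (manufacturer) and device halves."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "a MAC address is six octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:1A:2B:3C:4D:5E")   # invented example address
print("OUI:", oui)        # 00:1a:2b -- assigned to one manufacturer by the IEEE
print("Device:", device)  # 3c:4d:5e -- assigned however that manufacturer likes

# Six octets of eight bits each make a 48-bit number:
print(2 ** 48)            # 281474976710656 possible MAC addresses
```

Looking the OUI up in the IEEE's public registry is how network tools show a vendor name next to each MAC address.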


IP Datagrams and Encapsulation

Original

Just like all the data packets at the Ethernet layer have a specific name, Ethernet frames, so do packets at the network layer. Under the IP protocol, a packet is usually referred to as an IP datagram. Just like any Ethernet frame, an IP datagram is a highly structured series of fields that are strictly defined. The two primary sections of an IP datagram are the header and the payload. You'll notice that an IP datagram header contains a lot more data than an Ethernet frame header does. The very first field is four bits long and indicates what version of the Internet Protocol is being used. The most common version of IP is version four, or IPv4. Version six, or IPv6, is rapidly seeing more widespread adoption, but we'll cover that in a later module. After the version field, we have the header length field. This is also a four-bit field that declares how long the entire header is. This is almost always 20 bytes in length when dealing with IPv4; in fact, 20 bytes is the minimum length of an IP header. You couldn't fit all the data you need for a properly formatted IP header in any less space. Next, we have the service type field. These eight bits can be used to specify details about quality of service, or QoS, technologies. The important takeaway about QoS is that there are services that allow routers to make decisions about which IP datagram may be more important than others. The next field is a 16-bit field known as the total length field. It's used for exactly what it sounds like: to indicate the total length of the IP datagram it's attached to. The identification field is a 16-bit number that's used to group messages together. IP datagrams have a maximum size, and you might already be able to figure out what that is. Since the total length field is 16 bits, and this field indicates the size of an individual datagram, the maximum size of a single datagram is the largest number you can represent with 16 bits: 65,535.
If the total amount of data that needs to be sent is larger than what can fit in a single datagram, the IP layer needs to split this data up into many individual packets. When this happens, the identification field is used so that the receiving end understands that every packet with the same value in that field is part of the same transmission. Next up, we have two closely related fields: the flag field and the fragmentation offset field. The flag field is used to indicate if a datagram is allowed to be fragmented, or to indicate that the datagram has already been fragmented. Fragmentation is the process of taking a single IP datagram and splitting it up into several smaller datagrams. While most networks operate with similar settings in terms of what size an IP datagram is allowed to be, sometimes this can be configured differently. If a datagram has to cross from a network allowing a larger datagram size to one with a smaller datagram size, the datagram would have to be fragmented into smaller ones. The fragmentation offset field contains values used by the receiving end to take all the parts of a fragmented packet and put them back together in the correct order. Let's move along to the Time to Live, or TTL, field. This field is an 8-bit field that indicates how many router hops a datagram can traverse before it's thrown away. Every time a datagram reaches a new router, that router decrements the TTL field by one. Once this value reaches zero, a router knows it doesn't have to forward the datagram any further. The main purpose of this field is to make sure that when there's a misconfiguration in routing that causes an endless loop, datagrams don't spend all eternity trying to reach their destination. An endless loop could be when router A thinks router B is the next hop, and router B thinks router A is the next hop. Spoiler alert:
In an upcoming module, you'll learn that the TTL field has valuable troubleshooting qualities, but secrets like these are only released to those who keep going. After the TTL field, you'll find the protocol field. This is another 8-bit field that contains data about what transport layer protocol is being used. The most common transport layer protocols are TCP and UDP, and we'll cover both of those in detail in the next few lessons. Next, we find the header checksum field. This field is a checksum of the contents of the entire IP datagram header. It functions very much like the Ethernet checksum field we discussed in the last module. Since the TTL field has to be recomputed at every router that a datagram touches, the checksum field necessarily changes, too. After all of that, we finally get to two very important fields: the source and destination IP address fields. Remember that an IP address is a 32-bit number, so it should come as no surprise that these fields are each 32 bits long. Up next, we have the IP options field. This is an optional field and is used to set special characteristics for datagrams, primarily used for testing purposes. The IP options field is usually followed by a padding field. Since the IP options field is both optional and variable in length, the padding field is just a series of zeros used to ensure the header is the correct total size. Now that you know about all of the parts of an IP datagram, you might wonder how this relates to what we've learned so far. You might remember that in our breakdown of an Ethernet frame, we mentioned a section we described as the data payload section. This is exactly what the IP datagram is, and this process is known as encapsulation. The entire contents of an IP datagram are encapsulated as the payload of an Ethernet frame. You might have picked up on the fact that our IP datagram also has a payload section. The contents of this payload are the entirety of a TCP or UDP packet, which we'll cover later.
Hopefully, this helps you better understand why we talk about networking in terms of layers. Each layer is needed for the one above it.
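To make the field layout above concrete, here is a sketch that packs a minimal 20-byte IPv4 header with Python's struct module. The field values are arbitrary examples, and the header checksum is left at zero rather than actually computed, to keep the sketch short.

```python
import struct

def ip_to_bytes(ip):
    """Dotted-quad string -> 4 bytes."""
    return bytes(int(octet) for octet in ip.split("."))

def build_ipv4_header(src, dst, payload_len, ttl=64, protocol=6):
    """Pack a minimal 20-byte IPv4 header (no options, checksum left at 0).

    protocol 6 = TCP, 17 = UDP. The fields mirror the ones described above.
    """
    version_ihl = (4 << 4) | 5    # version 4, header length 5 words = 20 bytes
    return struct.pack(
        "!BBHHHBBH4s4s",          # network byte order, 20 bytes total
        version_ihl,              # 4 bits version + 4 bits header length
        0,                        # service type (QoS)
        20 + payload_len,         # total length: header + payload (max 65,535)
        0,                        # identification (groups fragments together)
        0,                        # flags + fragmentation offset
        ttl,                      # time to live, decremented at every hop
        protocol,                 # transport protocol carried in the payload
        0,                        # header checksum (recomputed at every hop)
        ip_to_bytes(src),         # 32-bit source address
        ip_to_bytes(dst),         # 32-bit destination address
    )

header = build_ipv4_header("9.100.100.100", "8.8.8.8", payload_len=100)
print(len(header))                # 20 -- the minimum IPv4 header length
print(hex(header[0]))             # 0x45: version 4, header length 5 words
```

These 20 bytes are exactly what gets encapsulated, together with the payload, inside an Ethernet frame's data section.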


Original

In the most basic of terms, subnetting is the process of taking a large network and splitting it up into many individual smaller subnetworks or subnets. By the end of this lesson, you’ll be able to explain why subnetting is necessary and describe how subnet masks extend what’s possible with just network and host IDs. You’ll also be able to discuss how a technique known as CIDR allows for even more flexibility than plain subnetting. Lastly, you’ll be able to apply some basic binary math techniques to better understand how all of this works. Incorrect subnetting setups are a common problem you might run into as an IT support specialist, so it’s important to have a strong understanding of how this works. That’s a lot, so let’s dive in. As you might remember from the last lesson, address classes give us a way to break the total global IP space into discrete networks. If you want to communicate with the IP address 9.100.100.100, core routers on the Internet know that this IP belongs to the 9.0.0.0 Class A Network. They then route the message to the gateway router responsible for the network by looking at the network ID. A gateway router specifically serves as the entry and exit path to a certain network. You can contrast this with core internet routers, which might only speak to other core routers.
Once your packet gets to the gateway router for the 9.0.0.0 Class A network, that router is now responsible for getting that data to the proper system by looking at the host ID. This all makes sense until you remember that a single Class A network contains 16,777,216 individual IPs. That's just way too many devices to connect to the same router. This is where subnetting comes in. With subnets, you can split your large network up into many smaller ones. These individual subnets will all have their own gateway routers serving as the ingress and egress point for each subnet.
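Under classful addressing, a core router only needs the first octet to recover the network ID. A sketch of that rule (classful logic is historical; modern routing uses CIDR, and classes D and E, first octet 224 and up, are omitted here for brevity):

```python
def classful_network(ip):
    """Return (class, network ID) under the historical classful scheme.

    Classes D and E (first octet 224 and up) are omitted for brevity.
    """
    octets = ip.split(".")
    first = int(octets[0])
    if first < 128:                          # Class A: 8-bit network ID
        return "A", octets[0] + ".0.0.0"
    if first < 192:                          # Class B: 16-bit network ID
        return "B", ".".join(octets[:2]) + ".0.0"
    return "C", ".".join(octets[:3]) + ".0"  # Class C: 24-bit network ID

print(classful_network("9.100.100.100"))     # ('A', '9.0.0.0')

# A Class A network leaves 24 bits of host ID space:
print(2 ** 24)                               # 16777216 addresses -- far too many
                                             # to hang off a single router
```

That 16,777,216 figure is exactly why the subnet ID, introduced next, is carved out of the host ID bits.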


Original

So far, we've learned about network IDs, which are used to identify networks, and host IDs, which are used to identify individual hosts. If we want to split things up even further, and we do, we'll need to introduce a third concept, the subnet ID. You might remember that an IP address is just a 32-bit number. In a world without subnets, a certain number of these bits are used for the network ID, and a certain number of the bits are used for the host ID. In a world with subnetting, some bits that would normally comprise the host ID are actually used for the subnet ID. With all three of these IDs representable by a single IP address, we now have a single 32-bit number that can be accurately delivered across many different networks. At the internet level, core routers only care about the network ID and use this to send the datagram along to the appropriate gateway router for that network. That gateway router then has some additional information that it can use to send that datagram along to the destination machine or the next router in the path to get there. Finally, the host ID is used by that last router to deliver the datagram to the intended recipient machine. Subnet IDs are calculated via what's known as a subnet mask. Just like an IP address, subnet masks are 32-bit numbers that are normally written out as four octets in decimal. The easiest way to understand how subnet masks work is to compare one to an IP address. Warning: dense material ahead. We're about to get into some tough material, but it's super important to properly understand how subnet masks work because they're so frequently misunderstood. Subnet masks are often glossed over as magic numbers. People just memorize some of the common ones without fully understanding what's going on behind the scenes. In this course, we're really trying to ensure that you leave with a well-rounded networking education.
So, even though subnet masks can seem tricky at first, stick with it, and you’ll get the hang of it in no time. Just know that in the next video, we’ll be covering some additional basics of binary math. Feel free to watch this video a second or third time after reviewing the material. Go at your own pace, and you’ll get there in the perfect amount of time. Let’s work with the IP address 9.100.100.100 again. You might remember that each part of an IP address is an octet, which means that it consists of eight bits. The number 9 in binary is just 1001. But since each octet needs eight bits, we need to pad it with some zeros in front. As far as an IP address is concerned, having a number 9 as the first octet is actually represented as 0000 1001. Similarly, the numeral 100 as an eight-bit number is 0110 0100. So, the entire binary representation of the IP address 9.100.100.100 is a lot of ones and zeros. A subnet mask is a binary number that has two sections. The beginning part, which is the mask itself, is a string of ones; just zeros come after this. The subnet mask, which is the part of the number with all the ones, tells us what we can ignore when computing a host ID. The part with all the zeros tells us what to keep. Let’s use the common subnet mask of 255.255.255.0. This would translate to 24 ones followed by eight zeros. The purpose of the mask, or the part that’s all ones, is to tell a router what part of an IP address is the subnet ID. You might remember that we already know how to get the network ID for an IP address. For 9.100.100.100, a Class A network, we know that this is just the first octet. This leaves us with the last three octets. Let’s take those remaining octets and imagine them next to the subnet mask in binary form. The numbers in the remaining octets that have a corresponding one in the subnet mask are the subnet ID. The numbers in the remaining octets that have a corresponding zero are the host ID.
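The octet-to-binary padding described above is easy to check with a quick sketch in Python (the address 9.100.100.100 is the one from the transcript; the rest is just illustration):

```python
# Show each octet of 9.100.100.100 as a zero-padded 8-bit binary string.
ip = "9.100.100.100"
octets = [int(part) for part in ip.split(".")]
binary = [f"{octet:08b}" for octet in octets]
print(".".join(binary))  # → 00001001.01100100.01100100.01100100
```

Note how the first octet, 9, really is 1001 with four zeros padded in front, and each 100 comes out as 0110 0100.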
The size of a subnet is entirely defined by its subnet mask. So for example, with the subnet mask of 255.255.255.0, we know that only the last octet is available for host IDs, regardless of what size the network and subnet IDs are. A single eight-bit number can represent 256 different numbers, or more specifically, the numbers 0-255. This is a good time to point out that, in general, a subnet can usually only contain two less than the total number of host IDs available. Again, using a subnet mask of 255.255.255.0, we know that the octet available for host IDs can contain the numbers 0-255, but zero is generally not used and 255 is normally reserved as a broadcast address for the subnet. This means that, really, only the numbers 1-254 are available for assignment to a host. While this “total number less two” approach is almost always true, generally speaking, you’ll refer to the number of hosts available in a subnet as the entire number. So, even if it’s understood that two addresses aren’t available for assignment, you’d still say that eight bits of host ID space has 256 addresses available, not 254. This is because those other IPs are still IP addresses, even if they aren’t assigned directly to a node on that subnet. Now, let’s look at a subnet mask that doesn’t draw its boundaries at an entire octet or eight bits of address. The subnet mask 255.255.255.224 would translate to 27 ones followed by five zeros. This means that we have five bits of host ID space, or a total of 32 addresses. This brings up a shorthand way of writing subnet masks. Let’s say we’re dealing with our old friend 9.100.100.100 with a subnet mask of 255.255.255.224. Since that subnet mask represents 27 ones followed by five zeros, a quicker way of referencing this is with the notation /27. The entire IP and subnet mask can now be written as 9.100.100.100/27. Neither notation is necessarily more common than the other, so it’s important to understand both. That was a lot.
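A minimal sketch of this mask arithmetic, using Python's standard `ipaddress` module (the module is just one convenient way to check the numbers; it isn't something the course itself uses):

```python
import ipaddress

# /27 is the same mask as 255.255.255.224: 27 ones followed by 5 zeros.
# strict=False lets us pass a host address and have the host bits masked off.
net = ipaddress.ip_network("9.100.100.100/27", strict=False)
print(net)                    # 9.100.100.96/27 — the network this host sits in
print(net.netmask)            # 255.255.255.224
print(net.num_addresses)      # 32 total addresses in the subnet...
print(net.num_addresses - 2)  # ...30 usable once network and broadcast are excluded
```

Both notations, `9.100.100.100/27` and a netmask of `255.255.255.224`, describe exactly the same 32-address block.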
Make sure to go back and watch this video again if you need a refresher, or if you’re a total wiz, you can move on to the next video on basic binary math. I’ll see you there or maybe here.

Translation

So far, we've learned about network IDs and host IDs, the two concepts used to identify networks and individual hosts. If we want to split things up even further, we need to introduce a third concept: the subnet ID. You might remember that an IP address is just a 32-bit number. In a world without subnets, some of these bits are used for the network ID and some for the host ID. In a world with subnetting, some bits that would normally be used for the host ID are instead used for the subnet ID. With all three of these IDs represented by a single 32-bit number, data can be accurately delivered across many different networks. At the internet level, core routers only care about the network ID, and use it to send a datagram along to the appropriate gateway router. The gateway router can then use some additional information to send the datagram to the destination machine, or to the next router on the path there. Finally, the host ID is used by that last router to deliver the datagram to the intended recipient machine. Subnet IDs are calculated via a subnet mask. Just like an IP address, a subnet mask is a 32-bit number, normally written as four decimal octets. The easiest way to understand how subnet masks work is to compare one to an IP address. Warning: dense material ahead. We're about to get into some tough content, but it's very important to properly understand how subnet masks work, because they're so often misunderstood. Subnet masks are frequently treated as magic numbers: people just memorize a few common ones without fully understanding what's going on behind them. In this course, we really want to make sure you come away with a well-rounded networking education. So even though subnet masks may seem tricky at first, stick with it and you'll get the hang of them quickly. Feel free to watch this video again after reviewing the material; go at your own pace and you'll master it in due time. Let's use the IP address 9.100.100.100 again as an example. You might remember that each part of an IP address is an octet. The number 9 in binary is just 1001, but since each octet needs eight bits, we have to pad it with some zeros in front. As far as an IP address is concerned, the number 9 as the first octet is represented as 0000 1001. Similarly, the number 100 as an eight-bit binary number is 0110 0100. So the full binary representation of the IP address 9.100.100.100 is just a string of ones and zeros.

Original

Address classes were the first attempt at splitting up the global Internet IP space. Subnetting was introduced when it became clear that address classes themselves weren’t an efficient way of keeping everything organized. But as the Internet continued to grow, traditional subnetting just couldn’t keep up. With traditional subnetting and the address classes, the network ID is always either 8 bits for class A networks, 16 bits for class B networks, or 24 bits for class C networks. This means that there might only be 254 class A networks in existence, but it also means there are 2,097,152 potential class C networks. That’s a lot of entries in a routing table. To top it all off, the sizes of these networks aren’t always appropriate for the needs of most businesses. 254 hosts in a class C network is too small for many use cases, but the 65,534 hosts available for use in a class B network is often way too large. Many companies ended up with various adjoining class C networks to meet their needs. That meant that routing tables ended up with a bunch of entries for a bunch of class C networks that were all actually being routed to the same place. This is where CIDR, or classless inter-domain routing, comes into play. CIDR is an even more flexible approach to describing blocks of IP addresses. It expands on the concept of subnetting by using subnet masks to demarcate networks. To demarcate something means to set something off. When discussing computer networking, you’ll often hear the term demarcation point to describe where one network or system ends and another one begins. In our previous model, we relied on a network ID, subnet ID, and host ID to deliver an IP datagram to the correct location. With CIDR, the network ID and subnet ID are combined into one. CIDR is where we get the shorthand slash notation that we discussed in the earlier video on subnetting. This slash notation is also known as CIDR notation.
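The class counts above follow directly from the bit layout, and are easy to verify with a little arithmetic (pure math, nothing course-specific):

```python
# Class C network IDs are 24 bits long, but the first 3 bits are fixed
# to 110, leaving 21 free bits: 2**21 possible class C networks.
print(2 ** 21)       # 2097152

# A class C leaves 8 host bits and a class B leaves 16; every network
# loses two addresses (network address and broadcast) to reserved uses.
print(2 ** 8 - 2)    # 254 usable hosts in a class C
print(2 ** 16 - 2)   # 65534 usable hosts in a class B
```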
CIDR basically just abandons the concept of address classes entirely, allowing an address to be defined by only two individual IDs. Let’s take 9.100.100.100 with a net mask of 255.255.255.0. Remember, this can also be written as 9.100.100.100/24. In a world where we no longer care about the address class of this IP, all we need is what the network mask tells us to determine the network ID. In this case, that would be 9.100.100; the host ID remains the same. This practice not only simplifies how routers and other network devices need to think about parts of an IP address, but it also allows for more arbitrary network sizes. Before, network sizes were static. Think only class A, class B, or class C, and only subnets could be of different sizes. CIDR allows for networks themselves to be differing sizes. Before this, if a company needed more addresses than a single class C could provide, they’d need an entire second class C. With CIDR, they could combine that address space into one contiguous chunk with a net mask of /23 or 255.255.254.0. This means that routers now only need one entry in their routing table to deliver traffic to these addresses instead of two. It’s also important to call out that you get additional available host IDs out of this practice. Remember that you always lose two host IDs per network. So, if a /24 network has two to the eight, or 256, potential hosts, you really only have 256 minus two, or 254, available IPs to assign. If you need two networks of this size, you have a total of 254 plus 254, or 508, hosts. A single /23 network, on the other hand, is two to the nine, or 512, addresses; 512 minus two leaves 510 hosts. Take a second and lock that into your memory. Then when you’re ready, we have a short ungraded quiz for you before we move on to routing in the next lesson.
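The host counting above can be sketched as a tiny helper (the function name is just for illustration):

```python
def usable_hosts(prefix_length):
    """Usable host IPs in an IPv4 network: the total address count minus
    the network address and the broadcast address."""
    return 2 ** (32 - prefix_length) - 2

# Two separate /24 networks vs. one aggregated /23:
print(usable_hosts(24) * 2)  # 508 — two /24s, each losing two addresses
print(usable_hosts(23))      # 510 — one /23, losing only two in total
```

Aggregating the two blocks into a single /23 both halves the routing-table entries and recovers two usable host addresses.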

Translation

Address classes were the first attempt at splitting up the global internet IP address space. Subnetting was introduced once it became clear that address classes themselves weren't an efficient way to keep everything organized. But as the internet kept growing, traditional subnetting just couldn't keep up. With traditional subnetting and address classes, the network ID is always 8 bits for class A networks, 16 bits for class B networks, or 24 bits for class C networks. This means there might only be 254 class A networks in existence, but also that there could be 2,097,152 potential class C networks — a huge number of entries in a routing table. Worse, these network sizes aren't always a good fit for the needs of most businesses. The 254 hosts of a class C network are too small for many uses, while the 65,534 hosts available in a class B network are often far too large. Many companies ended up needing several adjacent class C networks to meet their needs, which meant routing tables filled up with many entries for class C networks that were actually all routed to the same place. This is where classless inter-domain routing (CIDR) comes in. CIDR is an even more flexible way of describing blocks of IP addresses. It extends the concept of subnetting by using subnet masks to demarcate networks. When discussing computer networking, you'll often hear the term "demarcation point" used to describe where one network or system ends and another begins. In the previous model, we relied on the network ID, subnet ID, and host ID to deliver an IP datagram to the right place. With CIDR, the network ID and subnet ID are combined into one. CIDR is where the slash shorthand discussed in the earlier video on subnetting comes from; this slash notation is also known as CIDR notation. CIDR essentially abandons the concept of address classes entirely, allowing an address to be defined by just two individual IDs. Take 9.100.100.100 with a subnet mask of 255.255.255.0 as an example. Remember, this can also be written as 9.100.100.100/24. In a world that no longer cares about this IP's address class, all we need is the subnet mask to determine the network ID. In this case, the network ID would be 9.100.100, and the host ID stays the same. This practice not only simplifies how routers and other network devices reason about the parts of an IP address, it also allows for much more arbitrary network sizes. Before, network sizes were static.

Firewall

Original

You know what network device we haven’t mentioned that you’re probably super familiar with? A firewall. A firewall is just a device that blocks traffic that meets certain criteria. Firewalls are a critical concept to keeping a network secure since they are the primary way you can stop traffic you don’t want from entering a network.
Firewalls can actually operate at lots of different layers of the network. There are firewalls that can perform inspection of application layer traffic, and firewalls that primarily deal with blocking ranges of IP addresses. The reason we cover firewalls here is that they’re most commonly used at the transport layer.
Firewalls that operate at the transport layer will generally have a configuration that enables them to block traffic to certain ports while allowing traffic to other ports. Let’s imagine a simple small business network. The small business might have one server which hosts multiple network services. This server might have a web server that hosts the company’s website, while also serving as the file server for confidential internal documents.
A firewall placed at the perimeter of the network could be configured to allow anyone to send traffic to port 80 in order to view the web page. At the same time, it could block all access for external IPs to any other port, so that no one outside of the local area network could access the file server.
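The perimeter rule just described — allow anyone to reach port 80, block external IPs from everything else — can be sketched as a simple decision function. The names here are hypothetical, and real firewalls apply rule tables in the kernel or in dedicated hardware rather than running Python; this is only to make the logic concrete:

```python
def permit(src_is_external, dst_port):
    """Return True if the packet should be allowed through.

    Anyone may reach the web server on port 80; every other port is
    reachable only from inside the local area network.
    """
    if dst_port == 80:
        return True             # public web traffic is always allowed
    return not src_is_external  # everything else: internal hosts only

print(permit(True, 80))    # True  — outside visitor loading the website
print(permit(True, 445))   # False — outside host probing the file server
print(permit(False, 445))  # True  — internal host using the file server
```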
Firewalls are sometimes independent network devices, but it’s really better to think of them as a program that can run anywhere. For many companies and almost all home users, the functionality of a router and a firewall is performed by the same device. And firewalls can run on individual hosts instead of being a network device. All major modern operating systems have firewall functionality built-in. That way, blocking or allowing traffic to various ports and therefore to specific services can be performed at the host level as well. Up next, firing up your brain for a short quiz.

Translation

A firewall is a network device that blocks traffic matching certain criteria from entering a network. Firewalls can operate at different layers of the network, including the application layer and the transport layer. A firewall operating at the transport layer is typically configured to allow traffic to certain ports while blocking traffic to others. For example, in a small business network, a firewall placed at the network perimeter could be configured to let external IP addresses reach port 80 in order to view the web page, while blocking external IPs from any other port, protecting the internal file server. A firewall can be a standalone network device, or a software program running on a router or on a host. Modern operating systems generally have firewall functionality built in, so port- and service-level access control can also be performed at the host level.

DNS

Original

DNS is a great example of an application layer service that uses UDP for the transport layer instead of TCP. This can be broken down into a few simple reasons. Remember that the biggest difference between TCP and UDP is that UDP is connectionless. This means there is no setup or teardown of a connection, so much less traffic needs to be transmitted overall. A single DNS request and its response can usually fit inside of a single UDP datagram, making it an ideal candidate for a connectionless protocol. It’s also worth calling out that DNS can generate a lot of traffic. It’s true that caches of DNS entries are stored both on local machines and caching name servers, but it’s also true that if the full resolution needs to be processed, we’re talking about a lot more traffic. Let’s see what it would look like for a full DNS lookup to take place via TCP. First, the host that’s making the DNS resolution request would send a SYN packet to the local name server on port 53, which is the port that DNS listens on. This name server would then need to respond with a SYN ACK packet, and the original host would have to respond with an ACK in order to complete the three-way handshake. That’s three packets. Now that the connection has been established, the original host would have to send the actual request: I’d like the IP address for food.com, please. When it receives this request, the name server would have to respond with another ACK: I got your request for food.com. We’re up to five packets sent now. In our scenario, the first caching name server doesn’t have anything cached for food.com. So, it needs to talk to a root name server to find out who’s responsible for the .com TLD. This would require a three-way handshake, the actual request, the ACK of the request, the response, and then the ACK of the response. Finally, the connection would have to be closed via a four-way handshake. That’s 11 more packets, or 16 total.
Now that the recursive name server has the correct TLD name server, it needs to repeat that entire process to discover the proper authoritative name server. That’s 11 more packets, bringing us up to 27 so far. Finally, the recursive name server would have to repeat the entire process one more time while talking to the authoritative name server in order to actually get the IP of food.com. This is 11 more packets, for a running total of 38. Now that the local name server finally has the IP address of food.com, it can finally respond to the initial request: it sends a response to the DNS resolver that originally made the request, and then this computer sends an ACK back to confirm that it received the response. That’s two more packets, putting us at 40. Finally, the TCP connection needs to be closed via a four-way handshake. This brings us to a grand total of 44 packets at the minimum in order for a fully recursive DNS request to be fulfilled via TCP. 44 packets isn’t really a huge number in terms of how fast modern networks operate, but it adds up fast, as you can see. Remember that DNS traffic is just a precursor to actual traffic. A computer almost always performs a DNS lookup because it needs to know the IP of the domain name in order to send additional data, not just because it’s curious. Now, let’s check out how this would look with UDP. Spoiler alert: it doesn’t take as many packets. The original computer sends a UDP packet to its local name server on port 53 asking for the IP for food.com; that’s one packet. The local name server acts as a recursive server and sends a UDP packet along to the root server, which sends a response containing the proper TLD name server; that’s three packets. The recursive name server sends a packet to the TLD server and receives back a response containing the correct authoritative server. We’re now at five packets.
Next, the recursive name server sends its final request to the authoritative name server, which sends a response containing the IP for food.com. That’s seven packets. Finally, the local name server responds to the DNS resolver that made the request in the first place with the IP for food.com. That brings us to a grand total of eight packets. See? Way fewer packets. You can see now how much overhead TCP really requires, and for something as simple as DNS, it’s just not needed. It’s the perfect example for why protocols like UDP exist in addition to the more robust TCP. You might be wondering how error recovery plays into this, since UDP doesn’t have any. The answer is pretty simple: the DNS resolver just asks again if it doesn’t get a response. Basically, the same functionality that TCP provides at the transport layer is provided by DNS at the application layer in the most simple manner. A DNS server never needs to care about doing anything but responding to incoming lookups, and a DNS resolver simply needs to perform lookups and repeat them if they don’t succeed. A real showcase of the simplicity of both DNS and UDP. I should call out that DNS over TCP does in fact exist and is also in use all over. As the Web has gotten more complex, it’s no longer the case that all DNS lookup responses can fit in a single UDP datagram. In these situations, a DNS name server would respond with a packet explaining that the response is too large. The DNS client would then establish a TCP connection in order to perform the lookup.
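The packet counting in the last few paragraphs can be reproduced with a little arithmetic (the per-step numbers are taken straight from the transcript):

```python
# TCP: each of the three upstream lookups (root, TLD, authoritative) costs
# a 3-packet handshake, request + ACK, response + ACK, and a 4-packet teardown.
tcp_per_lookup = 3 + 2 + 2 + 4              # 11 packets per upstream lookup
tcp_total = (3 + 2)                         # handshake + request/ACK from the stub
tcp_total += 3 * tcp_per_lookup             # root, TLD, and authoritative lookups
tcp_total += 2                              # final response + ACK to the resolver
tcp_total += 4                              # closing four-way teardown
print(tcp_total)                            # 44

# UDP: one request and one response per hop, with no handshakes at all.
udp_total = 1 + 2 + 2 + 2 + 1
print(udp_total)                            # 8
```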

Translation

DNS is a classic example of an application layer service that uses UDP rather than TCP at the transport layer, for a few simple reasons. First, UDP is connectionless: there is no connection setup or teardown, so far less traffic needs to be sent. A single DNS request and its response usually fit in one UDP datagram, making DNS an ideal candidate for a connectionless protocol. Second, DNS can generate a lot of traffic. Although DNS entries are cached both on local machines and on caching name servers, a full resolution involves far more traffic. Performing a full DNS lookup over TCP would require many packet exchanges, including three-way handshakes and four-way teardowns, adding up to a lot of overall traffic. By contrast, UDP dramatically reduces the packet count and simplifies the exchange. In addition, DNS is a simple application layer protocol, and error recovery can be handled at the application layer, for example by simply re-sending a request, which makes up for UDP's lack of error recovery. It's worth noting that DNS over TCP does exist and is in use: as the web has grown more complex, not all DNS lookup responses fit in a single UDP datagram anymore. In those cases, the DNS server replies with a packet indicating that the response is too large, and the DNS client establishes a TCP connection to perform the lookup.
