Network final

I. EXPONENTIAL AND POISSON (EXPONENTIALPOISSON IN RESOURCES)
PDF in resources:
• Exponential distribution (inter-arrival, service) + Poisson process
• Properties: memoryless (+ proof), minimum inter-arrival time of multiple exponential processes (distribution, average, probability of arrival from process i), aggregate of exponential processes, sub-sampled exponential process.

A: see midterm notes
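The minimum-of-exponentials properties can be sanity-checked with a quick simulation. A minimal sketch (the function name, trial count, and seed are my own choices): the minimum of independent Exp(λ_i) variables should be Exp(Σλ_i), so its mean is 1/Σλ_i, and process i should "win" with probability λ_i/Σλ_j.

```python
import random

def min_of_exponentials(rates, n_trials=200_000, seed=1):
    """Simulate the minimum over independent Exp(rate_i) inter-arrival times.

    Returns (average of the minimum, fraction of trials won by process 0).
    Theory: the minimum is Exp(sum(rates)), so its mean is 1/sum(rates),
    and P(process i arrives first) = rates[i] / sum(rates).
    """
    rng = random.Random(seed)
    total = 0.0
    wins0 = 0
    for _ in range(n_trials):
        samples = [rng.expovariate(lam) for lam in rates]
        m = min(samples)
        total += m
        if samples.index(m) == 0:
            wins0 += 1
    return total / n_trials, wins0 / n_trials
```

For example, with rates (1, 3) the mean of the minimum should be close to 1/4, and process 0 should win about 25% of the time.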


II. QUEUEING AND BUFFERS
Leon-Garcia - Appendix A + Notes:
• M/M/1/K - characterization of the system, number of packets in the system, blocking probability, empty buffer probability, average time in the system (sketch of the proofs, E[N] and E[T]; memorize formulas only with K=∞).

A:

characterization: M/M/1: a single-server queue where arrivals follow a Poisson process (first 'M' for 'Markovian' arrivals), service times are exponentially distributed (second 'M'), and there is one server ('1'). K: the capacity of the system, including both the packet in service and the buffer. If the system is full (K customers), new arrivals are blocked and lost.

number of packets in the system:

with ρ = λ/μ, the stationary distribution is p_n = ρ^n (1 - ρ) / (1 - ρ^(K+1)) for n = 0, 1, ..., K (and p_n = 1/(K+1) when ρ = 1).

blocking probability:

p_B = p_loss = p_K = ρ^K (1 - ρ) / (1 - ρ^(K+1))

empty buffer probability:

p_0 = (1 - ρ) / (1 - ρ^(K+1))

Average time in the system (K = ∞, i.e. M/M/1, requires ρ < 1):

E[N] = ρ / (1 - ρ), and by Little's law E[T] = E[N]/λ = 1/(μ - λ)

Proof sketch:

the local balance equations λ p_(n-1) = μ p_n give p_n = ρ^n p_0; normalizing Σ_(n=0..K) p_n = 1 yields p_0 and hence all p_n.
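The M/M/1/K stationary distribution p_n = ρ^n (1-ρ)/(1-ρ^(K+1)) is easy to evaluate numerically. A minimal sketch (function name is mine):

```python
def mm1k_stats(lam, mu, K):
    """Stationary quantities of an M/M/1/K queue.

    lam: arrival rate, mu: service rate, K: system capacity.
    Returns (blocking probability p_K, empty probability p_0, E[N]).
    """
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p = [1.0 / (K + 1)] * (K + 1)          # uniform when rho == 1
    else:
        norm = 1 - rho ** (K + 1)
        p = [(rho ** n) * (1 - rho) / norm for n in range(K + 1)]
    expected_n = sum(n * pn for n, pn in enumerate(p))
    return p[K], p[0], expected_n
```

As a check, for large K and ρ = 0.5 this should approach the M/M/1 values E[N] = ρ/(1-ρ) = 1 and p_0 = 1 - ρ = 0.5.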


• Erlang B formula - M/M/c/c (no proof).
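The Erlang B formula B(c, a) = (a^c/c!) / Σ_{k=0..c} a^k/k! (with offered load a = λ/μ) is usually evaluated with its numerically stable recursion B(0) = 1, B(k) = a·B(k-1)/(k + a·B(k-1)). A sketch (function name is mine):

```python
def erlang_b(c, a):
    """Erlang B blocking probability for an M/M/c/c system.

    c: number of servers, a: offered load in Erlangs (lam/mu).
    Uses the recursion B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)),
    which avoids computing large factorials directly.
    """
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b
```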


III. ARCHITECTURE
Not included in final.
IV. APPLICATION LAYER
Kurose:
• 2.1: client-server and peer-to-peer architectures, processes and sockets, transport services, TCP and UDP services, Application layer protocols.

-client-server and peer-to-peer architectures:

Client-server:
        Clients don't communicate directly with each other
        Server has a fixed, well-known address (its IP)
        Clients can always send requests to the server's IP; the server is always on, listening for requests.
        Costly: the service provider has to pay for interconnection & bandwidth
        Star topology
        Applications: web, FTP, email, Telnet
Peer-to-peer:
        each machine acts as both a client and a server.

        all peers perform the same task

        as a client: a peer must have some way to find a peer acting as a server that is on and has the desired content

        as a server: a peer must have some way of advertising its availability and content.

        scalable.

        Hybrid architectures combine both (e.g., a central server for lookup, P2P for data transfer)

        cost effective: no server infrastructure or server bandwidth costs

        Applications: file sharing, peer-assisted download, Internet telephony


processes and sockets:

processes:

        It is not actually programs but processes that communicate. A process can be thought of as a program that is running within an end system.

        Within the same end system, processes can communicate with each other using interprocess communication, with rules governed by the end system's operating system.

        Processes on two different end systems communicate with each other by exchanging messages across the computer network.

        A sending process creates and sends messages into the network; a receiving process receives these messages and possibly responds by sending messages back.

        The process that initiates the communication (that is, initially contacts the other process at the beginning of the session) is labeled the client. The process that waits to be contacted to begin the session is the server.

Sockets:

        A process sends messages into, and receives messages from, the network through a software interface called a socket


-TCP and UDP:

UDP:
  Connectionless: sends datagrams to IP without setting up channels or data paths - no handshake
  Lightweight: minimal protocol mechanism
  Unreliable: a message can be lost without the sender knowing
  Not ordered: messages may be received in any order
  No congestion control: the application layer must handle congestion if needed
  Datagrams: individual packets are checked (via checksum) at the receiver only
  Unidirectional
  Small packet header overhead: 8 bytes of overhead

TCP:
  Connection-oriented: handshaking is required to alert client and server to prepare for the onslaught of packets.
  Heavyweight: 3 segments are needed to set up the connection (three-way handshake)
  Reliable: delivery is guaranteed
  Ordered: the order of messages is preserved; out-of-order data is buffered
  Congestion control: TCP handles congestion control
  Full-duplex connections: two-way communication
  Big packet header overhead: 20 bytes of overhead in every segment

UDP is more suited for multimedia streaming because:
  UDP is stateless, suitable for a large number of clients. Transmission delay is lower. By design, UDP is unidirectional.

        Streaming can tolerate a small amount of packet loss, so reliable data transfer is not absolutely critical.
  Real-time applications react very poorly to TCP's congestion control


-Application layer protocols

an application-layer protocol defines:

• The types of messages exchanged, for example, request messages and response messages

• The syntax of the various message types, such as the fields in the message and how the fields are delineated

• The semantics of the fields, that is, the meaning of the information in the fields

• Rules for determining when and how a process sends messages and responds to messages

An application-layer protocol is only one piece of a network application

HTTP is one example of an application-layer protocol.

SMTP is another example.


• 2.2: architecture, non-persistent and persistent connections (no header format), cookies

-http

purpose: request and transfer a webpage from a server to a user (client-server)

service characteristics:

1.pages requested and sent at random times

2.connected only for duration of  download

performance:

1. loss: not ok

2. delay: a few seconds is ok, but flexible

3. throughput: higher is better, but flexible

http connection management:

connection initiation: the HTTP client first initiates a TCP connection with the server

well-known ports

client: browser; server: web server

connection access control (limited resources, TCP): the server may block a connection

server load balancing: the server may redirect a connection


-connection termination:

1.Non-persistent:

        each object requires a separate connection, e.g. first request the main page and close the connection; if the page has images, open a connection per image, request the image, and close (each object costs two RTTs of delay: one to set up the TCP connection, one for the request/response).

        workaround: open parallel connections, one for each image (but the server may have limited resources in terms of TCP connections)

2.persistent:

        the server leaves the TCP connection open after sending a response; an entire page, including multiple images, or even multiple pages, can be sent over a single persistent connection between the same client and server. With pipelining, the client doesn't need to wait for the response to one request before sending the next.

        the whole HTTP page can be downloaded using a single TCP connection, which saves the server's computational resources.

        the server closes a connection when it isn't used for a certain time, or after the client is done with the server.

        disadvantage: objects are transferred serially
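The RTT cost of the two schemes can be compared with a toy calculation (a back-of-the-envelope sketch; the function name and the assumption of exactly one RTT for TCP setup and one per request/response, ignoring transmission time, are mine):

```python
def page_delay_rtts(n_objects, persistent):
    """Rough RTT count to fetch a base page plus n_objects embedded objects.

    Assumptions (mine): transmission time is ignored, one RTT for TCP
    setup and one RTT per request/response, no pipelining and no
    parallel connections.
    """
    if persistent:
        return 2 + n_objects        # setup + base page = 2 RTTs, then 1 per object
    return 2 * (1 + n_objects)      # every object pays setup + request/response
```

So a page with 10 images costs 22 RTTs non-persistent and serial, but only 12 over one persistent connection.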


-cookies:

        Because e-commerce requires state, but HTTP doesn't keep track of state, cookies are used to identify users and allow sites to keep track of users' state.

        Four components: (1) a cookie header line in the HTTP response message; (2) a cookie header line in the HTTP request message; (3) a cookie file kept on the user's end system and managed by the user's browser; (4) a back-end database at the website.

• 2.4: architecture, SMTP (operations, skip the message part), comparison with HTTP, mail access protocols,POP3, Imap (functioning, no need to memorize the commands).

-architecture:


-SMTP(simple mail transfer protocol):

        1.user agents allow users to read, reply to, forward, save, and compose messages

        2.the user agent sends the message to its mail server, where the message is placed in the mail server's outgoing message queue.

        3.when the reader wants to read a message, his user agent retrieves the message from his mailbox on his mail server.

        conclusion: SMTP is the principal application-layer protocol for Internet electronic mail. It uses the reliable data transfer service of TCP to transfer mail from the sender's mail server to the recipient's mail server. As with most application-layer protocols, SMTP has two sides: a client side, which executes on the sender's mail server, and a server side, which executes on the recipient's mail server. Both the client and server sides of SMTP run on every mail server. When a mail server sends mail to other mail servers, it acts as an SMTP client. When a mail server receives mail from other mail servers, it acts as an SMTP server.

-SMTP operations:

        (open TCP connection,upload messages,close TCP connection)

        1.the sender invokes his user agent, provides the receiver's address, composes the message, ...

        2.the user agent sends the message to the sender's mail server, where it is placed in the outgoing queue

        3.the client side of SMTP sees the message in the queue and opens a TCP connection to an SMTP server running on the receiver's mail server

        4.after handshaking, the SMTP client sends the sender's message into the TCP connection

        5.at the receiver's mail server, the server side of SMTP receives the message and places it in the receiver's mailbox

        6.the receiver invokes his user agent to read the message at his convenience


-Comparison with HTTP:

similarity:

        Both protocols are used to transfer files from one host to another: HTTP transfers files (also called objects) from a Web server to a Web client (typically a browser); SMTP transfers files (that is, e-mail messages) from one mail server to another mail server.

difference:

        1.HTTP is a pull protocol (TCP connection initiated by the machine that wants to receive); SMTP is a push protocol (TCP connection initiated by the machine that wants to send)

        2.SMTP requires each message, including the body, to be in 7-bit ASCII format; messages must be encoded into 7-bit ASCII. HTTP data does not impose this restriction.

        3.how a document consisting of text and images (and possibly other media types) is handled: HTTP encapsulates each object in its own HTTP response message; Internet mail places all of the message's objects into one message.


-mail access protocols:

        mail access uses a client-server architecture - the typical user reads e-mail with a client that executes on the user's end system.

        If Bob's mailbox lived on his PC, Alice's mail server would have to dialogue directly with Bob's PC, which would then need to be always on; instead the mailbox sits on an always-on mail server.

        This mail server is shared with other users and is typically maintained by the user's ISP (for example, a university or company).

        Alice's user agent uses SMTP to push the e-mail message into her mail server, then Alice's mail server uses SMTP (as an SMTP client) to relay the e-mail message to Bob's mail server. Why two steps? Without relaying through Alice's mail server, Alice's user agent doesn't have any recourse to an unreachable destination mail server. By having Alice first deposit the e-mail in her own mail server, Alice's mail server can repeatedly try to send the message to Bob's mail server.

        Bob's user agent can't use SMTP to obtain the messages because obtaining the messages is a pull operation, whereas SMTP is a push protocol.


-Post Office Protocol—Version 3 (POP3), Internet Mail Access Protocol (IMAP)

POP3 (Post Office Protocol - Version 3):

        1.messages are downloaded from the mailbox on the server to a mailbox on the non-webmail client.

IMAP:

        1.messages kept in(multiple) server mailboxes.

        2.or moved to mailboxes on non webmail clients

• 2.6: architecture and motivation, bitTorrent (architecture and operations).

bitTorrent:

Architecture and Operations of BitTorrent:

In BitTorrent, the collection of all peers participating in the distribution of a particular file is called a torrent. Peers within a torrent download and upload equal-size chunks of the file from and to each other, typically in sizes of 256 KBytes. A peer initially has no chunks but gradually accumulates more as it participates in the torrent

Trading Algorithm: BitTorrent uses a trading algorithm to manage the exchange of chunks between peers. A peer, like Alice in the example provided, prioritizes peers that supply her data at the highest rate. She measures the rate at which she receives bits from each of her neighbors and identifies the top four from whom she receives data the fastest. These peers are then prioritized for receiving chunks from Alice. This set of four 'unchoked' peers is recalculated every 10 seconds, and an additional neighbor is chosen every 30 seconds

Incentive Mechanism: The trading system in BitTorrent is based on a tit-for-tat incentive mechanism. Each peer tries to trade chunks with a set of peers to maintain a balanced distribution of file chunks within the network. However, peers outside of the selected trading circle ('choked' peers) do not receive any chunks. While this system can be circumvented, BitTorrent remains highly successful, with millions of peers sharing files across hundreds of thousands of torrents.


• 7.1: whole sub-chapter

  1. Role of the Transport Layer: The transport layer is responsible for providing logical communication between application processes running on different hosts. This means that although hosts may be physically distant, the transport layer makes them appear as if they are directly connected. This abstraction allows application processes to send messages to each other without needing to manage the complexities of the physical network infrastructure.

  2. Relationship Between Transport and Network Layers: The transport layer is positioned just above the network layer in the network protocol stack. While the transport layer facilitates logical communication between processes on different hosts, the network layer is responsible for logical communication between the hosts themselves.

  3. Transport Layer Protocols in the Internet: The Internet employs two main transport-layer protocols:

    • UDP (User Datagram Protocol): This protocol offers an unreliable, connectionless service to applications. It is simple and does not guarantee delivery, order, or integrity of the packets.
    • TCP (Transmission Control Protocol): This provides a reliable, connection-oriented service. It ensures that data is delivered correctly and in order, managing flow control and congestion control to maintain network efficiency.
  4. Multiplexing and Demultiplexing: These are crucial functions of the transport layer. Multiplexing involves gathering data from different application processes, encapsulating them into transport-layer segments, and sending them over the network. Demultiplexing is the process of directing incoming transport-layer segments from the network layer to the correct application process. This is essential for a network where multiple applications might be communicating concurrently.

• 7.2: 7.2.1, 7.2.2

7.1 Multimedia Networking Applications

  • Properties of Video (7.1.1):

    • High Bit Rate: Video requires significantly high bandwidth, varying from 100 kbps for low-quality video conferencing to over 3 Mbps for streaming high-definition movies.
    • Compression: Video can be compressed by exploiting spatial and temporal redundancy, trading off quality for a lower bit rate.
    • Multiple Versions: Different quality levels of the same video can be created for various bandwidth environments.
  • Properties of Audio (7.1.2):

    • Lower Bandwidth Requirements: Compared to video, digital audio requires significantly less bandwidth.
    • Sampling and Quantization: Analog audio is converted to digital through sampling and quantization, with a trade-off between quality and bit rate.
    • Compression Techniques: Techniques like MP3 and AAC compress audio to lower bit rates while maintaining quality.
  • Types of Multimedia Network Applications (7.1.3):

    • Streaming Stored Audio/Video: Involves streaming of pre-recorded content like movies or music.
    • Conversational Voice/Video-over-IP: Real-time conversational applications sensitive to delay but tolerant to some data loss.
    • Streaming Live Audio/Video: Similar to traditional broadcasting, but over the Internet.

7.2 Streaming Stored Video

  • UDP Streaming (7.2.1):

    • Uses UDP to stream video at a rate matching the client’s consumption rate.
    • Employs protocols like RTP for video transport and RTSP for control connections.
    • Challenges include variable bandwidth availability and issues with firewalls blocking UDP traffic.
  • HTTP Streaming (7.2.2):

    • Video is stored on an HTTP server and sent over TCP through HTTP responses.
    • Client-side buffering is used to mitigate TCP's variable transmission rate and to enable continuous playback.
    • HTTP streaming is more firewall-friendly and doesn't require a separate control server, making it popular for services like YouTube and Netflix.

Key Takeaways

  • Diverse Requirements: Different multimedia applications have unique service requirements and design issues.
  • Video vs. Audio: Video generally requires higher bandwidth and more complex handling compared to audio.
  • Streaming Techniques: Both UDP and HTTP streaming have their own advantages and challenges, with HTTP streaming becoming more prevalent due to its compatibility with existing web infrastructure and NAT/firewall traversal capabilities.
  • Adaptation and Quality: Both audio and video streaming technologies must balance quality and bandwidth, often creating different versions for different network conditions.


• 7.3: best-effort and QoS (packet loss, delay, jitter), Removing jitter, Recovering packet loss (FEC, interleaving)

  1. Best-Effort IP Service Limitations: The Internet Protocol (IP) offers best-effort service without guarantees on delay bounds or packet loss limits. This poses challenges for real-time applications like VoIP, which are sensitive to packet delay, jitter, and loss.

  2. Packet Loss in VoIP: In a network, packets may be lost due to full router buffers. While TCP offers reliable data transfer and could theoretically eliminate packet loss, its retransmission mechanisms increase end-to-end delay, making it unsuitable for real-time VoIP. Most VoIP applications thus use UDP. Packet loss rates between 1 and 20 percent can be tolerated, depending on voice encoding and loss concealment techniques used.

  3. End-to-End Delay: This includes transmission, processing, and queuing delays in routers, propagation delays in links, and end-system processing delays. For VoIP, delays under 150 milliseconds are typically imperceptible, but delays over 400 milliseconds can severely impact voice conversation quality.

  4. Packet Jitter: This refers to the variation in queuing delays a packet experiences in the network, causing fluctuations in the time taken for packets to travel from source to receiver. Jitter can degrade audio quality if not managed properly.

  5. Removing Jitter at the Receiver for Audio: This can be achieved by using timestamps and a playout delay at the receiver. Two strategies are discussed: fixed playout delay and adaptive playout delay.

  6. Recovering from Packet Loss: VoIP applications use loss recovery schemes like Forward Error Correction (FEC) and interleaving. FEC adds redundant information to the original packet stream to reconstruct lost packets. Interleaving resequences audio data before transmission to mitigate the effect of packet losses.
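The FEC idea can be shown with the simplest scheme (one XOR parity packet per group of n packets; a sketch with my own function names): if exactly one packet of the group is lost, XOR-ing the parity with the surviving packets reconstructs it.

```python
def xor_parity(packets):
    """Build one redundant FEC packet: byte-wise XOR of n equal-size packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover_lost(received, parity):
    """Reconstruct the single lost packet in a group (None marks the loss)."""
    missing = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, b in enumerate(pkt):
                missing[i] ^= b
    return bytes(missing)
```

The cost is the trade-off discussed above: one extra packet per group of n (bandwidth overhead 1/n) and a playout delay of up to n packets before recovery is possible.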

  7. Error Concealment: Techniques like packet repetition and interpolation are used to produce a replacement for lost packets. These work well for small loss rates and packet sizes.

  8. Case Study - Skype: Skype, a popular VoIP application, uses various codecs for different rates and qualities. It uses FEC for loss recovery and adapts to network conditions. Skype also employs P2P techniques for functions like user location and NAT traversal, and it uses server clusters for video conferencing calls.

• 7.4: RTP (basics, fields)

  • RTP Basics: RTP is commonly used for transporting multimedia formats like PCM, AAC, MP3 for sound, and MPEG and H.263 for video. It runs on top of UDP and is designed for end-to-end, real-time transfer of streaming media.

  • Packet Encapsulation: RTP encapsulates a media chunk within an RTP packet, which is then encapsulated in a UDP segment for transmission. At the receiving end, the process is reversed to extract the media chunk.

  • Interoperability and No QoS Guarantees: RTP does not guarantee timely delivery, packet delivery, or maintain packet order. Its main advantage is interoperability across different network applications due to its standardized packet structure.

  • RTP Streams: Each source in an RTP session, like a camera or microphone, can have its own independent RTP stream. RTP packets can also be multicast.

  • RTP Packet Header Fields:

    • Payload Type: Indicates the type of encoding used (e.g., PCM for audio).
    • Sequence Number: Used for detecting packet loss and restoring packet sequence.
    • Timestamp: Reflects the sampling instant of the first byte in the RTP data packet, useful for synchronization.
    • Synchronization Source Identifier (SSRC): Unique identifier for each RTP stream.
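The fixed RTP header (12 bytes, per RFC 3550) packs exactly the fields listed above. A sketch (function name is mine; it assumes no padding, header extension, or CSRC list):

```python
import struct

def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
    """Pack the 12-byte fixed RTP header (version 2), big-endian.

    Byte 0: V=2, P=0, X=0, CC=0; byte 1: marker bit + 7-bit payload type;
    then 16-bit sequence number, 32-bit timestamp, 32-bit SSRC.
    """
    byte0 = 2 << 6                                   # version 2, no options
    byte1 = ((marker & 1) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
```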

• 7.5: dimensioning, multiple classes of service (motivation, scheduling, policing), Per-connection QoS:

7.5.1 Network Dimensioning

  • Capacity Provisioning: To support multimedia applications with stringent performance requirements (low delay, jitter, and loss), the network needs sufficient capacity to avoid congestion. This involves determining the right amount of bandwidth (bandwidth provisioning) and designing an optimal network topology (network dimensioning).

  • Traffic Demand Models: Accurate models are needed for both the call level (number of users starting applications) and the packet level (packets generated by ongoing applications). Performance requirements must be well-defined, and models should predict end-to-end performance for given workloads.

  • Economic and Organizational Challenges: A significant barrier to providing adequate network capacity is the cost involved and the willingness of users to pay for enhanced services. Additionally, coordination among multiple ISPs for end-to-end service provision presents organizational challenges.

7.5.2 Multiple Classes of Service

  • Differentiation in Traffic Classes: ISPs might provide different levels of service to different classes of traffic, e.g., higher priority for VoIP over email. This requires mechanisms for packet marking, traffic isolation, and efficient resource utilization.

  • Scheduling Mechanisms: Approaches like First-In-First-Out (FIFO), Priority Queuing, Round Robin, and Weighted Fair Queuing (WFQ) are used to manage how packets are transmitted over network links.

  • Policing Mechanisms: The leaky bucket mechanism is an example used to regulate the rate at which traffic is allowed into the network, based on criteria like average rate, peak rate, and burst size.
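The leaky bucket policer can be sketched in a few lines (names are mine; this is the token formulation, where tokens accumulate at rate r up to bucket size b and a packet must grab a token to enter the network, so any interval of length t admits at most b + r·t packets):

```python
def police(arrival_times, rate, bucket_size):
    """Leaky-bucket policing sketch: one token per packet.

    arrival_times: sorted packet arrival times; tokens accumulate at
    `rate` per time unit, capped at `bucket_size` (bucket starts full).
    Returns True/False per packet: conforming or dropped.
    """
    tokens = bucket_size
    last = 0.0
    verdicts = []
    for t in arrival_times:
        tokens = min(bucket_size, tokens + (t - last) * rate)
        last = t
        if tokens >= 1.0:
            tokens -= 1.0
            verdicts.append(True)
        else:
            verdicts.append(False)
    return verdicts
```

With rate 1 and bucket size 2, a burst of four simultaneous packets passes only two; a fourth packet arriving one time unit later passes, since a new token has accumulated.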

7.5.3 Per-Connection QoS Guarantees

  • Resource Reservation and Call Admission: This approach involves explicitly reserving network resources (like bandwidth and buffers) for individual connections to guarantee QoS.

  • Call Setup Signaling: Protocols like RSVP (Resource Reservation Protocol) are used to coordinate resource allocation across routers in a network, ensuring that each session has sufficient resources for its QoS requirements.

  • Implementation Challenges: Despite the potential benefits, deploying per-connection QoS guarantees is complex and costly, and there has been limited widespread implementation. Often, simpler application-level mechanisms combined with adequate network dimensioning provide sufficient service quality.


V. TRANSPORT LAYER
Kurose:
• 3.1: interaction between transport and network layers, overview.

  • Transport Layer Role: It provides logical communication between application processes on different hosts, independent of the physical connection. Transport-layer protocols transform application-layer messages into segments.

  • Implementation: Transport protocols are implemented only in end systems, not in network routers. They process segments received from the network layer and prepare segments for transmission.

  • Difference from Network Layer: The network layer provides logical host-to-host communication, while the transport layer extends this to process-to-process communication. Transport protocols in end systems handle the data transfer between network layer and application processes.

  • Constraints: Services provided by transport protocols are often constrained by the underlying network layer. For instance, if the network layer can't guarantee delay or bandwidth, neither can the transport layer.


• 3.2: connection-oriented and connectionless multiplexing and demultiplexing.

3.2 Connection-Oriented and Connectionless Multiplexing and Demultiplexing

  • Multiplexing: (1) each active source port number maps to a unique application process; (2) the application passes the source IP address, source port number, destination IP address, and destination port number to the transport layer; (3) the transport layer puts the source and destination port numbers in the segment header and passes the destination IP address down to the network layer. (Involves gathering data from multiple sockets, forming segments, and passing them to the network layer.)

  • Demultiplexing involves obtaining the destination port number from the segment header and then delivering received segments to the correct socket.

  • UDP (Connectionless): Uses source and destination port numbers for multiplexing/demultiplexing. It's simple; segments are directly passed between network and application layers with minimal processing.

  • TCP (Connection-Oriented): Identified by a four-tuple (source/destination IP addresses and port numbers), ensuring precise data delivery across multiple connections. TCP handles segment organization, ensuring orderly and reliable data transfer.
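The UDP vs. TCP demultiplexing difference fits in a tiny sketch (the function and key layout are illustrative, not a real kernel API): a UDP socket is identified by the destination (IP, port) alone, while a TCP socket is identified by the full four-tuple.

```python
def demux_key(proto, src_ip, src_port, dst_ip, dst_port):
    """Return the key a receiving host would use to find the target socket.

    UDP (connectionless): destination (IP, port) only - segments from
    different sources with the same destination go to the same socket.
    TCP (connection-oriented): full four-tuple - two connections to the
    same server port are demultiplexed to different sockets.
    """
    if proto == "udp":
        return (dst_ip, dst_port)
    return (src_ip, src_port, dst_ip, dst_port)
```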


 

• 3.3: UDP services, pro and cons of connectionless transport

  • UDP Characteristics: Offers minimal transport services, providing multiplexing/demultiplexing and error checking, without reliability or congestion or flow control. It’s efficient for applications that need fast, efficient transmission, like DNS or streaming media.

  • Advantages:

    • Low Overhead: Smaller header compared to TCP.
    • No Connection Establishment: Eliminates delay in setting up connections.
    • No Connection State: Supports more active clients due to less required state information.
    • Control: Gives applications more control over data transmission and timing.
  • Disadvantages:

    • Unreliable: No guarantee of packet delivery, order, or integrity; no retransmission. The application layer is responsible for acknowledgments or other feedback to the source.
    • No Congestion Control: Can contribute to network congestion, as it doesn’t adapt to network state.
    • No Built-in Reliability: Requires additional mechanisms at the application level for reliability.
  • Use Cases: Suitable for real-time applications, simple query-response protocols (like DNS), and applications tolerating some data loss but requiring speed and efficiency.


• 3.4: principles (retransmission, feedback, error detection, numbering, timeouts), stop-and-wait, Go-Back-N,selective repeat.

  1. Retransmission: Essential for handling packet loss. If a packet doesn't reach its destination or is corrupted, it's retransmitted.

  2. Feedback: Utilized in forms like acknowledgments (ACKs) and negative acknowledgments (NAKs) to inform the sender about the status of the sent data.

  3. Error Detection: Involves mechanisms like checksums to detect any errors in the transmitted data.

  4. Numbering: Assigning sequence numbers to packets helps in identifying missing or duplicate packets and arranging them in order.

  5. Timeouts: If an acknowledgment for a packet isn't received within a certain time, it's assumed lost and retransmitted.

  6. Stop-and-Wait Protocol: Involves sending one packet and waiting for its acknowledgment before sending the next.

  7. Go-Back-N Protocol: Allows sending multiple packets before receiving acknowledgments but requires retransmitting all packets from a lost one onwards.

  8. Selective Repeat Protocol: Similar to Go-Back-N but only retransmits the specific lost packets, not all subsequent packets.
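The Go-Back-N receiver logic can be sketched in a few lines (names and the -1 "nothing yet" convention are mine): the receiver accepts only the next in-order sequence number, discards everything else, and always sends a cumulative ACK for the last in-order packet.

```python
def gbn_receiver(incoming):
    """Go-Back-N receiver sketch.

    incoming: sequence numbers in arrival order (0-based).
    Returns (delivered sequence numbers, cumulative ACKs sent).
    Out-of-order packets are discarded, forcing the sender to
    retransmit everything from the first missing packet onwards.
    """
    expected = 0
    delivered, acks = [], []
    for seq in incoming:
        if seq == expected:
            delivered.append(seq)
            expected += 1
        acks.append(expected - 1)  # highest in-order seq so far (-1 if none)
    return delivered, acks
```

For example, if packet 2 is lost, packets 3 and 4 are discarded even though they arrived, and the duplicate ACKs for 1 tell the sender to go back.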


• 3.5: timeout and RTT, timeout and fast retransmit, flow control

  1. Timeout and Round-Trip Time (RTT): timeout intervals are adjusted based on the estimated RTT, the time taken for a segment to reach the destination and for its acknowledgment to come back.

    • Estimating RTT: TCP measures SampleRTT, the time between sending a segment and receiving an acknowledgment for it. This value fluctuates due to network conditions.
    • Calculating EstimatedRTT: TCP uses an exponential weighted moving average (EWMA) to calculate a smoothed RTT estimate (EstimatedRTT), which puts more weight on recent samples.
    • Calculating the timeout: the retransmission timeout interval is based on EstimatedRTT and the variability in RTT (DevRTT). The formula used is typically TimeoutInterval = EstimatedRTT + 4 * DevRTT, which makes the timeout adaptive to network conditions.

  2. Timeout and Fast Retransmit: a packet is retransmitted before its timeout if multiple duplicate acknowledgments are received, indicating probable packet loss.

    • Detecting loss via duplicate ACKs: TCP uses duplicate acknowledgments (ACKs) as a hint of segment loss. If the sender receives three duplicate ACKs for the same data, it assumes that the segment following the acknowledged segment has been lost.
    • Fast retransmit mechanism: upon receiving three duplicate ACKs, TCP performs a fast retransmit, retransmitting the missing segment before the timeout for that segment expires.

  3. Flow Control: ensures that the sender doesn't overwhelm the receiver by adjusting the rate of data transmission based on the receiver's buffer capacity.

    • Receiver's buffer space (rwnd): the receiver informs the sender of the available buffer space (receive window, rwnd) in every ACK it sends.
    • Sender's control: the sender ensures that the amount of data sent but not yet acknowledged does not exceed the receiver's advertised window, thus adapting its send rate to the receiver's current buffer capacity.
    • Dynamic window sizing: the size of the receiver's window is dynamic and adjusts based on the rate at which the application reads data from the buffer, allowing for efficient use of network resources and avoiding buffer overflow.
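The RTT estimation and timeout rules above can be written out directly (a sketch; initializing EstimatedRTT to the first sample and DevRTT to half of it is my simplification, while alpha = 0.125 and beta = 0.25 are the standard recommended weights):

```python
def timeout_intervals(samples, alpha=0.125, beta=0.25):
    """Compute TCP retransmission timeouts from a list of SampleRTTs.

    EstimatedRTT = (1-alpha)*EstimatedRTT + alpha*SampleRTT   (EWMA)
    DevRTT       = (1-beta)*DevRTT + beta*|SampleRTT - EstimatedRTT|
    Timeout      = EstimatedRTT + 4*DevRTT
    """
    est = samples[0]
    dev = samples[0] / 2
    out = []
    for s in samples:
        est = (1 - alpha) * est + alpha * s
        dev = (1 - beta) * dev + beta * abs(s - est)
        out.append(est + 4 * dev)
    return out
```

With a steady RTT the deviation decays and the timeout converges down toward the RTT itself; a jittery path keeps DevRTT, and hence the safety margin, large.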


• 3.6: congestion control principles and approaches

  • Causes and Costs of Congestion: Congestion occurs when too many sources send data at high rates, leading to an overflow of router buffers. This results in not only packet loss but also inefficient resource utilization and poor end-system performance.
  • Scenario Analyses: Various scenarios are examined, ranging from simple cases with two senders and infinite buffer routers to more complex scenarios involving multihop paths and finite buffers. These scenarios illustrate how congestion impacts throughput, delay, and leads to packet retransmission and loss.
  • Approaches to Congestion Control: Two primary approaches are identified:
    • End-to-End Congestion Control: This involves no direct feedback from the network. The transport layer infers congestion indirectly (e.g., via packet loss and delay).
    • Network-Assisted Congestion Control: This includes explicit feedback from the network layer to the transport layer, such as congestion indication in packets.


• 3.7: TCP congestion control: principles of congestion estimation (lost segment, acknowledgment, probing),
slow start, congestion avoidance, fast recovery. TCP Reno

ACK:

Congestion control:

  1. Slow Start: TCP begins transmission at a low rate and doubles the congestion window (cwnd) every RTT until it reaches the slow start threshold (ssthresh) or detects congestion via a lost segment.
  2. Congestion Avoidance: Once the slow start threshold is reached, TCP transitions to a more conservative mode, increasing cwnd linearly (one MSS per RTT) to probe for available bandwidth.
  3. Fast Recovery: After detecting loss through triple duplicate ACKs, TCP halves its cwnd (rather than resetting it to one segment) and enters this phase, recovering quickly from isolated losses.
  4. TCP Reno: An implementation of TCP that incorporates the fast retransmit and fast recovery mechanisms, enhancing performance over its predecessor, TCP Tahoe.
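The three phases can be sketched as one simplified per-RTT update (a sketch in MSS units, not the full Reno state machine; the function and event names are invented):

```python
def reno_step(cwnd, ssthresh, event):
    """One simplified per-RTT update of TCP Reno's congestion window.
    'event' is 'ack', 'dupack3' (triple duplicate ACK) or 'timeout'."""
    if event == "ack":
        if cwnd < ssthresh:          # slow start: exponential growth
            cwnd *= 2
        else:                        # congestion avoidance: linear growth
            cwnd += 1
    elif event == "dupack3":         # fast retransmit / fast recovery
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh              # Reno halves cwnd instead of resetting it
    elif event == "timeout":         # severe congestion signal
        ssthresh = max(cwnd // 2, 2)
        cwnd = 1                     # back to slow start (Tahoe-like reaction)
    return cwnd, ssthresh

# Slow start 1 -> 2 -> 4 -> 8, then a triple-duplicate-ACK halves the window.
cwnd, ssthresh = 1, 64
for _ in range(3):
    cwnd, ssthresh = reno_step(cwnd, ssthresh, "ack")
print(cwnd)                                        # -> 8
print(reno_step(cwnd, ssthresh, "dupack3"))        # -> (4, 4)
```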
Fairness and Efficiency
  • TCP's congestion control aims for fairness (equal bandwidth distribution) and efficiency (maximizing utilization without causing congestion).
  • Fairness is challenged in scenarios with multiple connections and varying RTTs.

VI. NETWORK LAYER
Kurose:
• 2.5 - DNS: services, overview, hierarchical architecture.

DNS Services
  • Primary Function: DNS (Domain Name System) translates human-friendly domain names (e.g., www.google.com) into IP addresses (e.g., 192.168.1.1) that computers use to identify each other on the network.
  • Host Aliasing: DNS allows hosts with complex names to have simpler, memorable aliases.
  • Mail Server Aliasing: Simplifies e-mail addresses by translating easy-to-remember email addresses into canonical mail server addresses.
  • Load Distribution: Distributes network traffic among replicated servers (like Web servers) by rotating the IP addresses in DNS responses.
DNS Overview
  • DNS Querying Process: Involves a host requesting the DNS to resolve a domain name into an IP address. This process can include interaction with multiple DNS servers at different levels.
  • Caching Mechanism: DNS servers cache address mappings to reduce response time and network traffic. Cached entries have a Time-to-Live (TTL) after which they expire.
Hierarchical Architecture
  • Structure: DNS is organized in a hierarchical structure with different levels of DNS servers.
    • Root DNS Servers: The top of the DNS hierarchy, managing the global database of domain names and IP addresses.
    • Top-Level Domain (TLD) Servers: Manage top-level domains like .com, .net, .org, and country-specific domains (.uk, .fr, etc.).
    • Authoritative DNS Servers: Hold specific information about domains including individual hosts.
    • Local DNS Servers: Provided by ISPs, they act as the first point of contact for clients in the DNS querying process.
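The caching mechanism above can be sketched as a TTL-expiring map (a sketch only; the class name is invented, and the record shown is purely illustrative):

```python
import time

class DnsCache:
    """Toy resolver cache: entries expire after their TTL, mirroring how
    local DNS servers avoid re-querying the hierarchy for every lookup."""
    def __init__(self):
        self._entries = {}  # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (ip, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        record = self._entries.get(name)
        if record is None or now >= record[1]:
            return None  # miss or expired: must query the DNS hierarchy
        return record[0]

cache = DnsCache()
cache.put("example.com", "93.184.216.34", ttl=300, now=0)
print(cache.get("example.com", now=100))  # -> 93.184.216.34
print(cache.get("example.com", now=400))  # -> None (TTL expired)
```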


• 5.4.1 - Link layer addressing: MAC addresses.

Purpose of Link Layer Addresses
  • Necessity: Devices like hosts and routers have both network-layer and link-layer addresses. Network-layer addresses (like IP addresses) are used for communication between hosts across different networks, while link-layer addresses are used for communication within the same network segment or LAN.
  • Distinct Roles: Network-layer addresses are hierarchical and change based on network location. In contrast, link-layer addresses (MAC addresses) are flat, unique, and don't change regardless of location.
MAC Addresses
  • Association: MAC addresses are associated with network adapters (not directly with the host or router). A device with multiple network interfaces will have multiple MAC addresses.
  • Length and Notation: Typically 6 bytes long, expressed in hexadecimal format.
  • Uniqueness: Each MAC address is globally unique, managed by IEEE. No two adapters have the same MAC address.
  • Flexibility: While designed to be permanent, modern systems allow MAC addresses to be changed via software.
Address Resolution Protocol (ARP)
  • Function: ARP translates network-layer addresses (IP addresses) into link-layer addresses (MAC addresses).
  • Operation: Each device maintains an ARP table mapping IP addresses to corresponding MAC addresses. When it needs to send a frame to a specific IP address, the device checks its ARP table; if the address isn't there, it broadcasts an ARP query to acquire the corresponding MAC address.
  • Scope: ARP operates only within a single network segment or LAN.
Sending Datagrams within and between Subnets
  • Within the Same Subnet: The sending device uses ARP to find the MAC address of the destination device and sends the frame directly.
  • Across Different Subnets: The sending device uses ARP to find the MAC address of the gateway (router) for the destination subnet. The router then forwards the packet to the destination, possibly using ARP again to find the destination's MAC address within its subnet.
Characteristics and Importance
  • Layer Independence: MAC addresses allow for independence between the network and link layers, supporting multiple network-layer protocols.
  • Efficiency: Using MAC addresses at the link layer reduces the need for network-layer processing for local traffic.
  • Plug-and-Play: ARP tables are built automatically without manual configuration, facilitating easy network setup and management.

Overall, MAC addresses and ARP play a crucial role in local network communication.
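The within-subnet vs. across-subnet decision can be sketched as follows (a sketch only; the function name, addresses, and MACs are invented for illustration):

```python
def resolve_next_hop(dst_ip, my_subnet_prefix, gateway_ip, arp_table):
    """ARP decision sketch: same subnet -> resolve the destination itself;
    different subnet -> resolve the gateway. arp_table maps IP -> MAC;
    a miss would trigger a broadcast ARP query on the LAN."""
    target_ip = dst_ip if dst_ip.startswith(my_subnet_prefix) else gateway_ip
    mac = arp_table.get(target_ip)
    if mac is None:
        return f"broadcast ARP query for {target_ip}"
    return mac

arp_table = {"192.168.1.7": "aa:bb:cc:00:11:22",
             "192.168.1.1": "aa:bb:cc:00:11:01"}  # gateway's MAC
# Same subnet: the frame is addressed straight to the destination's MAC.
print(resolve_next_hop("192.168.1.7", "192.168.1.", "192.168.1.1", arp_table))
# Different subnet: the frame is addressed to the gateway's MAC instead.
print(resolve_next_hop("10.0.0.5", "192.168.1.", "192.168.1.1", arp_table))
```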


• 4.2 - transport layer vs network layer services, datagram networks (forwarding tables, forwarding rules).

4.2 - Transport Layer vs Network Layer Services, Datagram Networks
  • Transport Layer vs Network Layer:

    • The transport layer offers connectionless or connection-oriented service between two processes. UDP is an example of connectionless service, while TCP represents connection-oriented service.
    • The network layer parallels this with host-to-host services, either connectionless or connection-oriented.
    • Differences include the scope (host-to-host vs process-to-process) and the implementation in network architectures (e.g., Internet, ATM, frame relay).
  • Virtual Circuit and Datagram Networks:

    • Datagram networks use forwarding tables and rules for packet transmission without prior setup (e.g., the Internet).
    • Virtual-circuit networks require setup and maintain virtual circuits with specific VC numbers and forwarding tables (e.g., ATM, frame relay).
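Forwarding in a datagram network uses longest-prefix matching, which can be sketched as below (a sketch only; the prefixes and port numbers are invented for illustration):

```python
import ipaddress

def forward(dst, table):
    """Longest-prefix-match lookup: among all matching prefixes in the
    forwarding table, choose the most specific (longest) one."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, port in table:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port)
    return best[1] if best else "default port"

table = [("200.23.16.0/21", 0), ("200.23.24.0/24", 1), ("200.23.24.0/21", 2)]
print(forward("200.23.24.10", table))  # /24 beats /21 -> port 1
print(forward("200.23.18.5", table))   # only the first /21 matches -> port 0
```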


• 4.3 - Routers: architecture and components, input processing, switching, output processing, queueing.

  • Router Components:

    • Input Processing: Deals with physical and link layer functions, and determines the output port using a forwarding table.
    • Switching: The core function where packets are switched from input to output ports. Implemented via memory, a bus, or an interconnection network.
    • Output Processing: Involves selecting packets from the queue for transmission over the outgoing link.
  • Queueing in Routers:

    • Packets may queue at both input and output ports depending on traffic load and router configuration.
    • Queueing at output ports is significant for understanding packet loss within a network.
    • Queueing strategies and buffer management are crucial for efficient and fair packet handling.


• 4.4 - ICMP (overview).

  1. ICMP Overview: ICMP, defined in RFC 792, is used by hosts and routers for communicating network-layer information, primarily for error reporting. It's architecturally above IP but is carried inside IP datagrams.

  2. Error Reporting: A common example of ICMP usage is the “Destination network unreachable” error message, typically generated by routers when they can't find a path to the specified host.

  3. ICMP Structure: ICMP messages include a type and code field, and carry the header and the first 8 bytes of the IP datagram that triggered the ICMP message. This structure helps the sender identify the datagram causing the error.

  4. Ping Program: ICMP is utilized in the ping program, where an ICMP type 8 code 0 (echo request) is sent to a host, which replies with a type 0 code 0 (echo reply). Echo support is implemented directly in the operating system's TCP/IP stack.

  5. Source Quench Message: This message, used for congestion control, is not commonly used in modern networks. It was designed to let a congested router inform a host to reduce its transmission rate.

  6. Traceroute Program: Traceroute uses ICMP to trace routes from a host to any other host. It sends IP datagrams with increasing TTL (Time To Live) values. Routers along the path respond with ICMP messages (type 11 code 0) providing their names and IP addresses. The destination host eventually responds with a "port unreachable" message (type 3 code 3), indicating the traceroute process's completion.

  7. ICMP Message Types: Various ICMP message types include echo reply, destination unreachable, source quench, echo request, router advertisement, TTL expired, and IP header bad.

  8. Traceroute Mechanism: The Traceroute program needs to instruct the operating system to generate specific UDP datagrams and be able to receive ICMP messages. The round-trip time and router identities are obtained through this process.
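The type/code fields described above sit at the start of every ICMP message and can be decoded with a few lines (a sketch; the function name and the lookup of friendly names are my own additions):

```python
import struct

def parse_icmp_header(data):
    """Decode the fixed start of an ICMP header: 1-byte type, 1-byte code,
    2-byte checksum, all in network byte order."""
    icmp_type, code, checksum = struct.unpack("!BBH", data[:4])
    names = {(8, 0): "echo request", (0, 0): "echo reply",
             (11, 0): "TTL expired", (3, 3): "port unreachable"}
    return icmp_type, code, names.get((icmp_type, code), "other")

# An echo-request header as ping would send (checksum value arbitrary here).
print(parse_icmp_header(struct.pack("!BBH", 8, 0, 0xF7FF)))
# -> (8, 0, 'echo request')
```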


• 4.5 - graph model, shortest path, global vs decentralized routing algorithms, Link State protocol, Distance Vector, comparison LS vs DV, Hierarchical routing.

  1. Graph Model in Routing: Networks are modeled as graphs, where nodes represent routers and edges represent physical links. Edges have associated costs, reflecting factors like link length, speed, or monetary cost. Routing aims to find the least costly paths between nodes.

  2. Shortest Path Problem: The goal is to find paths with the minimum sum of edge costs. This could mean the least number of hops or other criteria, depending on edge costs.

  3. Global vs. Decentralized Routing Algorithms:

    • Global Routing Algorithms: These algorithms use complete, global knowledge of the network to compute paths. An example is the Link State (LS) algorithm.
    • Decentralized Routing Algorithms: These operate iteratively and distribute the routing process across nodes. Each node starts with knowledge of its directly attached links, and information is gradually exchanged. The Distance Vector (DV) algorithm is an example.
  4. Link State (LS) Routing Algorithm:

    • LS algorithms require full knowledge of the network's topology and link costs.
    • A common LS algorithm is Dijkstra's algorithm, which iteratively computes the shortest paths from a source node to all other nodes.
    • It has O(n^2) complexity and is robust against incorrect routing information from individual nodes.
  5. Distance Vector (DV) Routing Algorithm:

    • DV algorithms are decentralized, with each node maintaining a vector (list) of estimates to all other nodes.
    • Based on the Bellman-Ford equation, it iteratively updates paths until no changes occur.
    • DV can be slower to converge and susceptible to routing loops and the count-to-infinity problem.
  6. DV Algorithm:

    • Operation: In DV, each node communicates only with its direct neighbors. It provides these neighbors with its least-cost estimates to all nodes it knows about in the network.
    • Message Complexity: DV requires message exchanges between directly connected neighbors at each iteration. The convergence time can vary based on several factors. When link costs change, the algorithm propagates these changes only if they result in a changed least-cost path.
    • Convergence: The DV algorithm may converge slowly and can experience routing loops and the count-to-infinity problem during convergence.
    • Robustness: DV's robustness is lower since a node can advertise incorrect least-cost paths to any or all destinations. Incorrect calculations by one node can diffuse through the entire network.
  7. LS Algorithm:

    • Operation: In LS, each node communicates with all other nodes (typically via broadcast) but only shares the costs of its directly connected links.
    • Message Complexity: LS requires O(|N||E|) messages for each node to learn the cost of each link in the network. When a link cost changes, the new cost must be broadcast to all nodes.
    • Convergence: LS is an O(|N|^2) algorithm and also requires O(|N||E|) messages. It generally converges faster than DV.
    • Robustness: LS is more robust because each node computes its own forwarding tables independently. If a router fails or provides incorrect information, it mainly affects its own calculations, not the entire network.
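The LS computation each router runs is Dijkstra's algorithm, which can be sketched as follows (the toy topology and its costs are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Link-state route computation: Dijkstra's algorithm over a graph
    given as {node: {neighbor: link_cost}}. Returns the least cost from
    source to every reachable node, as each LS router computes for itself."""
    dist = {source: 0}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"u": {"v": 2, "w": 5, "x": 1},
         "v": {"u": 2, "w": 3},
         "w": {"u": 5, "v": 3, "x": 3},
         "x": {"u": 1, "w": 3}}
print(dijkstra(graph, "u"))  # -> {'u': 0, 'v': 2, 'w': 4, 'x': 1}
```

Note how the direct u-w link (cost 5) loses to the two-hop path via x (cost 4): the algorithm always keeps the least-cost path, not the fewest hops.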


• 4.6 - intra-AS routing, RIP and OSPF (overview), inter-AS routing, BGP basics, BGP route selection, routing policy, why inter and intra-AS routing.

4.6 - Routing in the Internet

Intra-AS Routing
  1. Routing Information Protocol (RIP):

    • An early intra-AS protocol using a distance-vector approach.
    • Utilizes hop count as a metric, with a maximum limit of 15 hops.
    • RIP updates are exchanged between neighbors approximately every 30 seconds.
    • RIP version 2 supports route aggregation.
  2. Open Shortest Path First (OSPF):

    • A more advanced link-state protocol than RIP.
    • Constructs a complete topological map and utilizes Dijkstra's algorithm.
    • Allows for multiple equal-cost paths, integrated unicast and multicast routing, and hierarchical structuring into areas.
    • OSPF broadcasts link-state information and uses a HELLO message to check link operability.
Inter-AS Routing
  1. Border Gateway Protocol (BGP):
    • The de facto standard for inter-AS routing.
    • Provides a way to exchange reachability information and routing paths based on AS policies.
    • Utilizes attributes like AS-PATH and NEXT-HOP in route advertisements.
    • Employs policies for route selection, filtering, and propagation.
    • BGP has complexities around policy, scalability, and performance.


• 4.7 - Broadcast routing, uncontrolled and controlled flooding, spanning tree, multicast routing, multicast trees

4.7 - Broadcast and Multicast Routing

Broadcast Routing Algorithms
  1. Uncontrolled Flooding:

    • Simple but inefficient; causes broadcast storms in networks with cycles.
  2. Controlled Flooding:

    • Sequence-number-controlled flooding: Nodes track received broadcast packets to avoid duplication.
    • Reverse Path Forwarding (RPF): Nodes forward packets only if they arrive on the shortest path back to the source.
  3. Spanning-Tree Broadcast:

    • Constructs a spanning tree covering all nodes; eliminates redundant packets.
    • Center-based approach: Nodes join by sending tree-join messages towards a designated center node.
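Sequence-number-controlled flooding can be sketched as below (a sketch only; the function name and three-node cycle are invented for illustration):

```python
def flood(graph, source, payload):
    """Sequence-number-controlled flooding: each node remembers which
    (source, seq) broadcasts it has already handled, so copies looping
    back around a cycle are dropped instead of re-flooded (no broadcast storm)."""
    seen = {node: set() for node in graph}
    deliveries = []
    queue = [(source, (source, 1, payload))]   # (at_node, packet)
    while queue:
        node, (src, seq, data) = queue.pop(0)
        if (src, seq) in seen[node]:
            continue                           # duplicate: drop it
        seen[node].add((src, seq))
        deliveries.append(node)
        for neighbor in graph[node]:
            queue.append((neighbor, (src, seq, data)))
    return deliveries

# A cycle A-B-C-A: every node still receives the broadcast exactly once.
graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(flood(graph, "A", "hello"))  # -> ['A', 'B', 'C']
```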
Multicast Routing
  1. Internet Group Management Protocol (IGMP):

    • Manages host-router multicast group memberships.
    • Hosts inform their local router of group membership through IGMP messages.
  2. Multicast Routing Protocols:

    • Protocols like DVMRP and PIM (both dense and sparse mode) are used.
    • PIM-Sparse mode uses rendezvous points, and Source-Specific Multicast (SSM) focuses on a single sender.
    • Multiprotocol BGP extensions support multicast routing information.
  3. Application vs. Network Layer Multicast:

    • There's a debate about whether multicast should be handled at the network or application layer.
    • Internet multicast has yet to see widespread adoption, but application-layer multicast in systems like PPLive is gaining traction.

VII. LAN
Kurose:
• 5.1 - services, adapters and interfaces.

5.1: Services, Adapters, and Interfaces
  • Definition: Nodes (hosts, routers, switches, WiFi access points) connect via communication channels (links).
  • Datagram Transfer: Requires encapsulation in a link-layer frame for transmission over links.
  • Transportation Analogy: Compares the process to a travel agent planning a trip involving various transportation modes, each representing a different link-layer protocol.


• 5.2 - error detection and correction principles, checksum and CRC

5.2: Error Detection and Correction Principles, Checksum, and CRC
  • Reliable Delivery: Some link-layer protocols ensure error-free delivery across links.
  • Error Detection and Correction: Detecting and correcting bit errors introduced by signal attenuation and electromagnetic noise. Common methods include checksum and Cyclic Redundancy Check (CRC).
  • Checksum: Used for error detection in transport and network layers.
  • CRC: A more powerful method, widely used at the link layer for error detection.
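The Internet checksum used at the transport and network layers can be sketched as follows (the sample bytes are an arbitrary illustration, not a real header):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum as used by IP/UDP/TCP:
    sum 16-bit words, fold carries back in, then take the complement."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                       # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

checksum = internet_checksum(b"\x45\x00\x00\x1c")
# Receiver-side check: data plus its own checksum must sum to all ones,
# so checksumming the whole thing yields 0.
print(hex(internet_checksum(b"\x45\x00\x00\x1c" + checksum.to_bytes(2, "big"))))
# -> 0x0
```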


• 5.3 - MAC protocols: taking turns and random access protocols, TDMA and FDMA, slotted aloha, aloha,
CSMA, carrier sensing, collision detection.

5.3: MAC Protocols: Taking Turns and Random Access Protocols, TDMA and FDMA, Slotted Aloha, Aloha, CSMA, Carrier Sensing, Collision Detection
  • MAC Protocols: Coordinate frame transmissions from multiple nodes on a single broadcast link.
  • Types:
    • Channel Partitioning Protocols (TDM, FDM, CDMA): Divide channel into smaller pieces (time, frequency, code).
    • Random Access Protocols (ALOHA, Slotted ALOHA, CSMA): Nodes transmit at full rate, using probabilistic retransmission strategies upon collisions.
    • Taking-Turns Protocols (Polling, Token Passing): Nodes take turns in transmitting to avoid collisions.
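For slotted ALOHA, a slot is successful only when exactly one of the n nodes transmits; this probability can be computed directly (a sketch; the function name is my own):

```python
def slotted_aloha_throughput(n, p):
    """Probability that a slot carries exactly one successful transmission
    when each of n nodes transmits with probability p: n * p * (1-p)^(n-1)."""
    return n * p * (1 - p) ** (n - 1)

# With p = 1/n the throughput approaches 1/e ≈ 0.37 as n grows:
# the classic slotted-ALOHA efficiency limit.
for n in (10, 100, 1000):
    print(round(slotted_aloha_throughput(n, 1 / n), 4))
```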


• 5.4 - switched local area networks: MAC addressing and ARP, ethernet (hub and switches), link layer switches,and switches vs routers.

5.4.1 Link-Layer Addressing and ARP
  • MAC Addresses: Unique, flat-structure addresses assigned to network adapters, essential for frame delivery within a LAN.
  • Address Resolution Protocol (ARP): Translates IP addresses to MAC addresses, allowing communication within a LAN.
5.4.2 Ethernet
  • Evolution of Ethernet: From original coaxial cable systems to modern switched Ethernet using star topology.
  • Frame Structure: Ethernet frames encapsulate data packets, with fields for source/destination MAC addresses, type, and error checking (CRC).
  • Technologies: Variants like 10BASE-T, 100BASE-T, Gigabit Ethernet, each with different speeds and physical media.
5.4.3 Link-Layer Switches
  • Functionality: Forward frames based on MAC addresses, using a switch table to direct traffic efficiently.
  • Self-Learning: Switches autonomously build and update their switch table based on observed traffic, eliminating the need for manual configuration.
  • Advantages of Switches:
    • Eliminate collisions found in hub-based networks.
    • Allow heterogeneous link types in the same network.
    • Enable easier network management and troubleshooting.
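The self-learning behavior above can be sketched in a few lines (a sketch only; the function name, MAC strings, and port numbers are invented for illustration):

```python
def switch_forward(table, frame, in_port):
    """Self-learning switch sketch: learn the source MAC's port, then
    forward (known destination), flood (unknown destination), or
    filter (destination already on the arrival segment)."""
    src, dst = frame
    table[src] = in_port                      # learn / refresh the entry
    if dst not in table:
        return "flood"                        # unknown destination
    if table[dst] == in_port:
        return "filter"                       # no forwarding needed
    return f"forward to port {table[dst]}"

table = {}
print(switch_forward(table, ("AA", "BB"), 1))  # -> flood (BB unknown)
print(switch_forward(table, ("BB", "AA"), 2))  # -> forward to port 1
print(switch_forward(table, ("CC", "AA"), 1))  # -> filter (AA is on port 1)
```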
5.4.4 Virtual Local Area Networks (VLANs)
  • Concept: Create isolated networks over the same physical infrastructure, enhancing security and traffic management.
  • Port-based VLANs: Group switch ports into separate broadcast domains, each acting as an independent network.
  • Trunking: Connects VLANs across multiple switches, using tagging protocols like 802.1Q to identify VLAN membership of frames.
  • Flexibility and Management: VLANs facilitate easier management of network resources and user mobility within a network.

Switches vs. Routers

Switches

  1. Layer of Operation: Switches primarily operate at the Data Link Layer (Layer 2) of the OSI model, though some advanced switches can operate at the Network Layer (Layer 3).

  2. Function: The primary function of a switch is to connect devices within a single local area network (LAN). It uses MAC addresses to forward data to the correct destination within the LAN.

  3. Data Handling: Switches handle frames. When a frame arrives, the switch reads the destination MAC address and forwards the frame to the appropriate port.

  4. Network Segmentation: Switches are often used to segment a network into smaller, more efficient subnetworks. This segmentation reduces unnecessary network traffic and increases performance.

  5. Types: There are unmanaged switches, which are plug-and-play, and managed switches, which offer more control over network traffic and provide additional features like VLANs (Virtual Local Area Networks).

Routers

  1. Layer of Operation: Routers operate at the Network Layer (Layer 3) of the OSI model.

  2. Function: The primary function of a router is to connect multiple networks together, such as connecting a home network to the Internet. Routers use IP addresses to determine the best path for forwarding data packets.

  3. Data Handling: Routers handle packets. They examine the destination IP address in a packet and use routing tables to determine the best path for the packet to travel.

  4. Network Boundary: Routers act as a boundary between different networks, managing traffic between these networks and often performing network address translation (NAT).

  5. Advanced Features: Routers often come with advanced features like firewalls, virtual private network (VPN) support, and Quality of Service (QoS) settings.

Key Differences

  • Scope of Operation: Switches are used within a single network (like a home or office LAN), while routers are used to connect different networks (e.g., a LAN with the Internet).
  • Addressing Method: Switches use MAC addresses for local frame forwarding, whereas routers use IP addresses for making decisions about packet forwarding across different networks.
  • Functionality: Switches create a single broadcast domain and multiple collision domains, whereas routers create multiple broadcast and collision domains.
  • Performance and Complexity: Routers are generally more complex than switches and are responsible for more complex network tasks like route determination and data packet forwarding across diverse networks.


• 6.3 - architecture, 802.11 MAC protocol, RTS and CTS

6.3.1 The 802.11 Architecture
  • Basic Service Set (BSS): Consists of wireless stations and an Access Point (AP). Supports 'infrastructure mode' (with APs) and 'ad hoc mode' (peer-to-peer without APs).
  • MAC Addresses: Both wireless stations and APs have unique MAC addresses.
Channels and Association
  • SSID and Channels: APs are assigned SSIDs and operate on specific channels within the 2.4-2.485 GHz band. Non-overlapping channels minimize interference.
  • Association Process: Wireless stations use passive or active scanning to detect APs and associate with an AP for network access.
6.3.2 The 802.11 MAC Protocol
  • CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance): Used to avoid collisions, unlike Ethernet's CSMA/CD (Collision Detection).
  • Link-Layer Acknowledgments: Ensures reliable frame delivery in the presence of high error rates in wireless channels.
  • RTS/CTS Mechanism: Request to Send/Clear to Send process mitigates the hidden terminal problem, reserving channel for communication.