Transport Layer

by E-Computer Concepts May 21, 2020 at 9:00 am

In computer networking, a transport layer provides end-to-end or host-to-host communication services for applications within a layered architecture of network components and protocols. The transport layer provides services such as connection-oriented data stream support, reliability, flow control, and multiplexing.

Transport layer implementations are contained in both the TCP/IP model (RFC 1122), which is the foundation of the Internet, and the Open Systems Interconnection (OSI) model of general networking; however, the definitions and details of the transport layer differ between these models. In the OSI model the transport layer is most often referred to as Layer 4, or L4.


Transport Layer Services

Connection-Oriented Communication

It is normally easier for an application to interpret a connection as a data stream rather than having to deal with the underlying connectionless models, such as the datagram model of the User Datagram Protocol (UDP) and of the Internet Protocol (IP).
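The contrast between the two models can be seen directly at the socket API. The following is a minimal loopback sketch (our own, not from the original article): UDP delivers discrete datagrams, while a stream socket delivers an undifferentiated sequence of bytes whose write boundaries are not preserved.

```python
import socket

# Datagram model: each sendto() is an independent message.
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))                  # 0 lets the OS pick a port
udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"hello", udp_rx.getsockname())
data, _ = udp_rx.recvfrom(1024)                # one whole datagram

# Stream model: the receiver simply reads bytes until it has enough;
# how the sender's writes were split up is invisible to it.
a, b = socket.socketpair()
a.sendall(b"hel")
a.sendall(b"lo")
buf = b""
while len(buf) < 5:
    buf += b.recv(1024)

print(data, buf)  # b'hello' b'hello'
```

The application-facing difference is exactly the one described above: the datagram arrives whole or not at all, while the stream is just bytes in order.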

Same Order Delivery

The network layer doesn’t generally guarantee that packets of data will arrive in the same order that they were sent, but this is often a desirable feature. Same-order delivery is usually achieved through segment numbering, with the receiver passing segments to the application in order. Holding later segments back while waiting for an earlier one can cause head-of-line blocking.
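A small sketch of the mechanism (segment numbers here are illustrative integers, not real TCP byte offsets): the receiver buffers whatever arrives and releases data to the application only in sequence order.

```python
# Segments in arrival order: (sequence number, payload). Segment 2
# arrives first, so it must wait in the buffer (head-of-line blocking).
received = [(2, b"world"), (0, b"hello "), (1, b"dear ")]

buffer = {}
next_seq = 0
delivered = []
for seq, payload in received:
    buffer[seq] = payload
    while next_seq in buffer:        # deliver only the next in-order segment
        delivered.append(buffer.pop(next_seq))
        next_seq += 1

print(b"".join(delivered))  # b'hello dear world'
```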


Reliability

Packets may be lost during transport due to network congestion and errors. By means of an error detection code, such as a checksum, the transport protocol may check that the data is not corrupted, and verify correct receipt by sending an ACK or NACK message to the sender. Automatic repeat request schemes may be used to re-transmit lost or corrupted data.
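The checksum used by TCP, UDP and IP is the 16-bit ones’-complement sum described in RFC 1071; a sketch (function name is our own) shows both sides of the check:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum over the given bytes (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                          # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

payload = b"hello!"
chk = internet_checksum(payload)

# Receiver side: checksumming data plus the transmitted checksum yields 0
# when nothing was corrupted in transit.
print(internet_checksum(payload + chk.to_bytes(2, "big")))  # 0
```

If the result is nonzero, the segment is discarded and no ACK is returned, which eventually triggers a re-transmission at the sender.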

Flow control

The rate of data transmission between two nodes must sometimes be managed to prevent a fast sender from transmitting more data than can be supported by the receiving data buffer, causing a buffer overrun. This can also be used to improve efficiency by reducing buffer underrun.
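A toy model (entirely our own simplification, with an instantly-draining receiver) illustrates the invariant: the sender never has more unacknowledged data in flight than the receiver’s advertised buffer space, so the buffer cannot overrun.

```python
recv_buffer_size = 8          # bytes the receiver can hold
data = b"abcdefghijklmnop"    # 16 bytes to transmit

sent = 0          # next byte to send
acked = 0         # highest byte the receiver has consumed
log = []

while acked < len(data):
    window = recv_buffer_size - (sent - acked)   # advertised free space
    if window > 0 and sent < len(data):
        chunk = data[sent:sent + window]         # never exceed the window
        sent += len(chunk)
        log.append(("send", chunk))
    else:
        # Window closed: the sender must wait until the receiver drains
        # its buffer and advertises more space (instantaneous here).
        acked = sent
        log.append(("ack", acked))
```

The resulting log alternates full-window sends with acknowledgments; at no point does in-flight data exceed the 8-byte buffer.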

Congestion Avoidance

Congestion control regulates traffic entry into a telecommunications network so as to avoid congestive collapse. It attempts to avoid oversubscription of the processing or link capacity of the intermediate nodes and networks, and takes resource-reducing steps, such as reducing the rate at which packets are sent.

For example, automatic repeat requests may keep the network in a congested state; this situation can be avoided by adding congestion avoidance to the flow control, including slow-start. This keeps the bandwidth consumption at a low level in the beginning of the transmission, or after packet re-transmission.
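Slow-start’s growth pattern can be sketched in a few lines (illustrative constants; real TCP counts the window in bytes and follows RFC 5681): the window doubles each round trip until it reaches the slow-start threshold, then grows linearly.

```python
cwnd = 1          # congestion window, in segments
ssthresh = 16     # slow-start threshold
history = []

for rtt in range(8):          # one fully-ACKed round trip per iteration
    history.append(cwnd)
    if cwnd < ssthresh:
        cwnd *= 2             # slow start: exponential growth
    else:
        cwnd += 1             # congestion avoidance: linear growth

print(history)  # [1, 2, 4, 8, 16, 17, 18, 19]
```

This is what keeps bandwidth consumption low at the beginning of a transmission: the sender starts with a tiny window and only ramps up as acknowledgments confirm the network can carry the load.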


Multiplexing

Ports can provide multiple endpoints on a single node. For example, the name on a postal address is a kind of multiplexing: it distinguishes between different recipients at the same location. Computer applications each listen for information on their own ports, which enables the use of more than one network service at the same time. Multiplexing is part of the transport layer in the TCP/IP model, but of the session layer in the OSI model.
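Demultiplexing at the receiving host is essentially a lookup from destination port to the application bound to it; a sketch with illustrative port assignments:

```python
# One node, several listening applications, keyed by port number.
listeners = {
    53:  "dns-server",
    80:  "web-server",
    123: "ntp-server",
}

def demultiplex(dest_port: int) -> str:
    """Hand an arriving datagram to the application bound to its port."""
    return listeners.get(dest_port, "ICMP port unreachable")

print(demultiplex(80))    # web-server
print(demultiplex(9999))  # ICMP port unreachable
```

The fallback mirrors real behavior: a UDP datagram to a port nobody is listening on typically provokes an ICMP "port unreachable" reply.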


A transport layer protocol provides for logical communication between application processes running on different hosts. By “logical” communication, we mean that although the communicating application processes are not physically connected to each other (indeed, they may be on different sides of the planet, connected via numerous routers and a wide range of link types), from the applications’ viewpoint, it is as if they were physically connected.

Application processes use the logical communication provided by the transport layer to send messages to each other, free from worrying about the details of the physical infrastructure used to carry these messages.

At the sending side, the transport layer converts the messages it receives from a sending application process into 4-PDUs (that is, transport-layer protocol data units). This is done by (possibly) breaking the application messages into smaller chunks and adding a transport-layer header to each chunk to create 4-PDUs.

The transport layer then passes the 4-PDUs to the network layer, where each 4-PDU is encapsulated into a 3-PDU. At the receiving side, the transport layer receives the 4-PDUs from the network layer, removes the transport header from the 4-PDUs, reassembles the messages and passes them to a receiving application process.
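The two steps above can be sketched as a toy encapsulation routine. The header layout here is our own invention (just ports and a byte offset), not a real TCP or UDP header:

```python
import struct

MSS = 4  # toy maximum chunk size, in bytes

def to_pdus(message: bytes, src_port: int, dst_port: int):
    """Sending side: chop the message up and prepend a header per chunk."""
    pdus = []
    for seq in range(0, len(message), MSS):
        chunk = message[seq:seq + MSS]
        header = struct.pack("!HHI", src_port, dst_port, seq)
        pdus.append(header + chunk)
    return pdus

def from_pdus(pdus):
    """Receiving side: strip headers, reassemble in sequence order."""
    parts = sorted((struct.unpack("!HHI", p[:8])[2], p[8:]) for p in pdus)
    return b"".join(chunk for _, chunk in parts)

pdus = to_pdus(b"hello transport", 49152, 80)
shuffled = list(reversed(pdus))       # arrival order need not match
print(from_pdus(shuffled))  # b'hello transport'
```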

A computer network can make more than one transport layer protocol available to network applications. For example, the Internet has two protocols — TCP and UDP. Each of these protocols provides a different set of transport layer services to the invoking application.


Transport Layer Protocols

User Datagram Protocol (UDP)

The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768.

UDP uses a simple connectionless transmission model with a minimum of protocol mechanism. It has no handshaking dialogues, and thus exposes any unreliability of the underlying network protocol to the user’s program. There is no guarantee of delivery, ordering, or duplicate protection. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram.

With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without prior communications to set up special transmission channels or data paths.

UDP is suitable for purposes where error checking and correction is either not necessary or is performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.

If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP).

UDP (User Datagram Protocol) is an alternative communications protocol to Transmission Control Protocol (TCP) used primarily for establishing low-latency and loss-tolerating connections between applications on the Internet. Both UDP and TCP run on top of the Internet Protocol (IP) and are sometimes referred to as UDP/IP or TCP/IP. Both protocols send short packets of data, called datagrams.


A number of UDP’s attributes make it especially suitable for certain applications:

  • It is transaction-oriented, suitable for simple query-response protocols such as the Domain Name System or the Network Time Protocol.
  • It provides datagrams, suitable for modeling other protocols such as IP tunneling or Remote Procedure Call and the Network File System.
  • It is simple, suitable for bootstrapping or other purposes without a full protocol stack, such as the DHCP and Trivial File Transfer Protocol.
  • It is stateless, suitable for very large numbers of clients, such as in streaming media applications for example IPTV.
  • The lack of re-transmission delays makes it suitable for real-time applications such as Voice over IP, online games, and many protocols built on top of the Real Time Streaming Protocol.
  • It works well in unidirectional communication, suitable for broadcast information such as in many kinds of service discovery, and for shared information such as broadcast time or the Routing Information Protocol.
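A minimal UDP exchange over loopback illustrates the connectionless, query-response style described above (addresses and message contents are illustrative):

```python
import socket

# "Server": bind to any free port, wait for a datagram, reply.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # 0: let the OS choose a free port
addr = server.getsockname()

# "Client": no connect(), no handshake, just send a datagram.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"what time is it?", addr)

request, client_addr = server.recvfrom(1024)
server.sendto(b"9:00 am", client_addr)  # reply to whoever asked

reply, _ = client.recvfrom(1024)
print(reply)
```

Note how little state either side keeps: each datagram carries its own addressing, which is exactly what makes UDP cheap for very large numbers of clients.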

Reliable Byte Stream (TCP)

A reliable byte stream is a common service paradigm in computer networking; it refers to a byte stream in which the bytes which emerge from the communication channel at the recipient are exactly the same, and in exactly the same order, as they were when the sender inserted them into the channel.
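A real loopback TCP connection demonstrates the property: however the sender’s writes are split into segments, the bytes emerge at the receiver intact and in order. This echo sketch (our own) uses a background thread for the server side:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

def echo():
    conn, _ = server.accept()
    while chunk := conn.recv(1024):
        conn.sendall(chunk)          # echo the stream back unchanged
    conn.close()

threading.Thread(target=echo, daemon=True).start()

client = socket.create_connection(server.getsockname())
for piece in (b"reliable ", b"byte ", b"stream"):
    client.sendall(piece)            # three separate writes into the stream
client.shutdown(socket.SHUT_WR)      # we are done sending

received = b""
while chunk := client.recv(1024):
    received += chunk
print(received)  # b'reliable byte stream'
```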

Connection-Oriented (TCP)

  • Flow control: keep sender from overrunning receiver.
  • Congestion control: keep sender from overrunning network.

Characteristics of TCP Reliable Delivery

TCP provides a reliable, byte-stream, full-duplex inter-process communications service to application programs/processes. The service is connection-oriented and uses the concept of port numbers to identify processes.


Two processes which wish to communicate using TCP must first request a connection. A connection is closed when communication is no longer desired.


An application which uses the TCP service is unaware of the fact that data is broken into segments for transmission over the network.


Once a TCP connection is established, application data can flow in both directions simultaneously — note, however, that many application protocols do not take advantage of this.

Port Numbers

Port numbers identify processes/connections in TCP.
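More precisely, a TCP connection is identified by the pair of endpoints, a 4-tuple of source address, source port, destination address and destination port; a sketch with illustrative values shows why two clients can share the same server port:

```python
# Demultiplexing on the full 4-tuple, not the server port alone.
connections = {
    ("10.0.0.5", 49200, "93.184.216.34", 80): "conn-A",
    ("10.0.0.5", 49201, "93.184.216.34", 80): "conn-B",  # same server port
}

segment = {"src": ("10.0.0.5", 49201), "dst": ("93.184.216.34", 80)}
key = (*segment["src"], *segment["dst"])
print(connections[key])  # conn-B
```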

Edge Systems and Reliable Transport

  1. An edge system is any computer (host, printer, even a toaster…) which is “connected to” the Internet — that is, it has access to the Internet’s packet delivery system, but doesn’t itself form part of that delivery system.
  2. A transport service provides communications between application processes running on edge systems. As we have already seen, application processes communicate with one another using application protocols such as HTTP and SMTP. The interface between an application process and the transport service is normally provided using the socket mechanism.

TCP Segments

TCP slices the incoming byte-stream data into segments for transmission across the Internet. A segment is a highly-structured data package consisting of an administrative header and some application data.
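The fixed 20-byte administrative header (RFC 793, options omitted) can be packed in a few lines; all field values below are illustrative:

```python
import struct

def tcp_header(src_port, dst_port, seq, ack, flags=0, window=65535):
    """Pack a minimal 20-byte TCP header (no options)."""
    offset_flags = (5 << 12) | flags      # data offset = 5 words, then flags
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq, ack,
                       offset_flags,
                       window,
                       0,                  # checksum: computed separately over
                                           # header + data + pseudo-header
                       0)                  # urgent pointer (unused here)

segment = tcp_header(49152, 80, seq=1000, ack=0, flags=0x02) + b""  # a SYN
print(len(segment))  # 20
```

Application data, when present, is simply appended after the header to form the complete segment.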


Source and Destination Port Numbers

We have already seen that TCP server processes wait for connections at a pre-agreed port number. At connection establishment time, TCP first allocates a client port number — a port number by which the client, or initiating, process can be identified. Each segment contains both port numbers.

Segment and Acknowledgment Numbers

Every byte of the stream is numbered, and each transmitted segment carries a 32-bit Sequence Number identifying its first byte, so that it can be explicitly acknowledged by the recipient. The Acknowledgment Number identifies the next byte expected by the originator of this segment.

Application Data

The data field is optional, because some segments convey only control information: for example, an ACK segment has a valid acknowledgment number field but no data. The data field can be any size up to the currently configured maximum segment size (MSS) for the whole segment.

TCP Operation

When a segment is received correctly and intact at its destination, an acknowledgment (ACK) segment is returned to the sending TCP. This ACK contains the sequence number of the last byte correctly received, incremented by 1. ACKs are cumulative: a single ACK can be sent for several segments if, for example, they all arrive within a short period of time.

The network service can fail to deliver a segment. If the sending TCP waits too long for an acknowledgment, it times out and resends the segment, on the assumption that it has been lost.

In addition, the network can potentially deliver duplicated segments, and can deliver segments out of order. TCP buffers or discards out-of-order or duplicated segments as appropriate, using the byte count for identification.
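The cumulative-acknowledgment behavior described above can be modeled in a few lines (our own toy byte-level model; real TCP tracks contiguous ranges, not individual bytes):

```python
# Segments in arrival order: (first byte, length). The third segment
# arrives out of order, leaving a gap at byte 8.
segments = [(0, 4), (4, 4), (12, 4), (8, 4)]

received = set()
ack = 0              # next byte expected: the cumulative ACK value
acks_sent = []
for start, length in segments:
    received.update(range(start, start + length))
    while ack in received:
        ack += 1     # advance past every contiguous in-order byte we hold
    acks_sent.append(ack)

print(acks_sent)  # [4, 8, 8, 16]
```

The repeated ACK of 8 is the receiver signaling the gap; once the missing segment arrives, a single ACK of 16 covers both it and the buffered out-of-order segment.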


TCP Connections

An application process requests TCP to establish, or open, a (reliable) connection to a server process running on a specified edge-system, and awaiting connections at a known port number. After allocating an unused client-side port number, TCP initiates an exchange of connection establishment “control segments”:

  • This exchange of segments is called a 3-way handshake, and is necessary because any one of the three segments can be lost. The names ACK and SYN refer to “control bits” in the TCP header: for example, if the ACK bit is set, the segment is an ACK segment.
  • Each TCP chooses a random initial sequence number (the x and y in this example). This is crucial to the protocol’s operation, because otherwise “old” segments from a previous, closed connection might be misinterpreted as valid within the current connection.
  • A connection is closed by a further exchange of control segments (FIN and ACK), with each direction of the stream closed independently. It is therefore possible for a connection to be half open if one end has requested a close and the other has not yet responded with the appropriate segment.
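The sequence-number exchange in the handshake can be sketched as follows (x and y as in the text; the segment structure is our own simplification):

```python
import random

x = random.randrange(2**32)          # client's initial sequence number
y = random.randrange(2**32)          # server's initial sequence number

handshake = [
    {"flags": "SYN",     "seq": x},                    # client to server
    {"flags": "SYN|ACK", "seq": y, "ack": x + 1},      # server to client
    {"flags": "ACK",     "seq": x + 1, "ack": y + 1},  # client to server
]

# Each side acknowledges the other's initial sequence number plus one,
# confirming that both directions of the stream are established.
assert handshake[1]["ack"] == handshake[0]["seq"] + 1
assert handshake[2]["ack"] == handshake[1]["seq"] + 1
```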
