
Research Project on Network Protocols

Network Protocols

To send information from one computer to another, standard methods of information transfer and processing are necessary. These methods are known as computer protocols. By definition, a protocol is a ‘formal description of message formats and the rules two computers must follow to exchange those messages.

Protocols can describe low-level details of machine-to-machine interfaces (e.g., the order in which bits and bytes are sent across a wire) or high-level exchanges between application programs (e.g., the way in which two programs transfer a file across the Internet)’.

The emergence of new computer protocols has marked landmark developments in the history of the Internet. Over time, protocols became more and more sophisticated, allowing the network to operate faster and more efficiently. New protocols created the conditions for virtually error-free data transfer at high rates.

Each stage of the Internet’s development can be associated with a particular protocol that dominated the network during a given period. ARPANET was based on a host-to-host protocol called the Network Control Protocol (NCP), which was implemented in the early 1970s.

Later in the decade, the Transmission Control Protocol/Internet Protocol (TCP/IP) was introduced. The arrival of the World Wide Web in the early 1990s led to the development of the Hypertext Transfer Protocol (HTTP). The nature of these three protocols is discussed in the following chapters.

Network Control Protocol (NCP)

The Network Control Protocol (NCP) was a host-to-host protocol for ARPANET developed in the early 1970s. At that time, the lower protocol layers were provided by the Interface Message Processors (IMPs), while NCP provided the transport layer, consisting of the ARPANET Host-to-Host Protocol (AHHP) and the Initial Connection Protocol (ICP).

Here it is necessary to make a short digression into the difference between higher- and lower-level protocols. Protocols are structured in a layered design, also known as a protocol stack, and ‘there is a distinction between the functions of the lower (network) layers, which are primarily designed to provide a connection or path between users to hide details of underlying communications facilities, and the upper (or higher) layers, which ensure data exchanged are in correct and understandable form.’
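To make the idea of layering more concrete, the following minimal Python sketch (an illustration only, not a real protocol stack) shows how each layer wraps the data handed down from the layer above in its own header. The field names and addresses are invented for the example.

```python
# A toy model of a protocol stack: each layer treats the data handed down
# from the layer above as an opaque payload and prepends its own header.

def application_layer(text: str) -> bytes:
    return text.encode("utf-8")                       # e.g. the text of a request

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    header = f"TRANSPORT src={src_port} dst={dst_port} len={len(payload)}|".encode()
    return header + payload                           # segment = header + application data

def network_layer(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    header = f"NETWORK {src_ip}->{dst_ip}|".encode()
    return header + segment                           # packet = header + transport segment

packet = network_layer(
    transport_layer(application_layer("GET /index.html"), src_port=50000, dst_port=80),
    src_ip="192.0.2.1",
    dst_ip="198.51.100.7",
)
print(packet)
```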

NCP provided information exchange within ARPANET. However, there were serious concerns about the efficiency and reliability of data transfer:

‘NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts.’

The protocol had several major drawbacks. Scientists actively looked for ways to fix these problems and improve the way information was transmitted across the network.

Transmission Control Protocol/Internet Protocol (TCP/IP)

While the Network Control Protocol (NCP) showed that the emerging network had great potential, several drawbacks in the protocol impaired its development. Robert E. Kahn, who worked for the Defense Advanced Research Projects Agency (DARPA) and is regarded as one of the inventors of TCP/IP, laid down several important principles for the functioning of the global network.

First of all, each separate network had to be able to connect to the Internet without any internal changes. Secondly, packets that failed to reach their final destination were to be retransmitted from the source within a reasonable period of time. Algorithms were designed to prevent lost packets from permanently disabling communications.

What later became known as gateways and routers were used to connect the networks. They had to operate in such a way that no information about the individual flows of packets passing through them was retained. This kept them simple and avoided the need for complicated adaptation and recovery after a network failure.

There was also a need to provide host-to-host ‘pipelining’, so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it. Gateways needed functions that allowed them to forward packets appropriately; these included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces where necessary, and others.

There was also a demand for global addressing, techniques for host-to-host flow control, and end-to-end checksums, together with the ability to reassemble packets from fragments and to detect duplicates.
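As an illustration of what an end-to-end checksum involves, the sketch below implements the ones’-complement checksum later standardized for IP and TCP (RFC 1071). It is offered only as an example of the technique, not as part of Kahn’s original design.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum of the kind used by IP, TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"                               # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]         # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)      # fold any carry back into the low 16 bits
    return ~total & 0xFFFF                            # ones' complement of the sum

segment = b"example payload"
check = internet_checksum(segment)
# The receiver recomputes the checksum; a mismatch means the data was damaged in transit.
assert internet_checksum(segment) == check
print(hex(check))
```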

The network also had to be able to interface with different operating systems. In addition, one of the foundational principles of the Internet was set forth at that time: there would be no global control at the operations level.

Taking all of the above into account, Robert E. Kahn worked out a protocol called the Transmission Control Protocol (TCP). Under this protocol, every packet of information sent through the network was assigned a sequence number.

This number guaranteed that the information would be reassembled in the correct order by the receiving computer. Sequence numbers also made it possible to detect packets lost in transmission so that they could be resent.

When the receiving computer successfully received a packet, it sent back a special packet called an acknowledgement. In addition, each packet carried a checksum, calculated by the sending computer and verified by the destination computer, to ensure that the packet had not been damaged en route and that no information was lost.
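The following Python sketch simulates these ideas (sequence numbers, checksums, acknowledgements and retransmission) over an artificial lossy channel. It is a simplified stop-and-wait style model with an invented toy checksum, not an implementation of TCP itself.

```python
import random

def toy_checksum(data: bytes) -> int:
    # Invented toy checksum for the simulation (not TCP's real algorithm).
    return sum(data) & 0xFFFF

def unreliable_link(frame):
    """Simulate a channel that can lose or corrupt a frame."""
    seq, check, payload = frame
    if random.random() < 0.2:
        return None                                   # frame lost in transit
    if random.random() < 0.1 and payload:
        payload = bytes([payload[0] ^ 0xFF]) + payload[1:]  # first byte corrupted
    return seq, check, payload

def send_message(message: bytes, chunk_size: int = 4) -> bytes:
    # Sender: split the message into packets, each tagged with a sequence number.
    packets = {seq: message[i:i + chunk_size]
               for seq, i in enumerate(range(0, len(message), chunk_size))}
    received = {}
    unacked = set(packets)

    while unacked:                                    # keep retransmitting until all packets are acknowledged
        for seq in sorted(unacked):
            payload = packets[seq]
            frame = unreliable_link((seq, toy_checksum(payload), payload))
            if frame is None:
                continue                              # lost: no acknowledgement, so it will be resent
            r_seq, r_check, r_payload = frame
            if toy_checksum(r_payload) != r_check:
                continue                              # damaged: the receiver discards it silently
            received[r_seq] = r_payload               # receiver keeps the packet...
            unacked.discard(r_seq)                    # ...and its acknowledgement reaches the sender

    # The receiver reassembles the packets in sequence-number order.
    return b"".join(received[seq] for seq in sorted(received))

print(send_message(b"sequence numbers keep packets in order"))
```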

As a result, the protocol provided the basis for a more reliable and effective network. No computer in the network was a single point of failure or able to control the whole system.

The Transmission Control Protocol (TCP) was complemented by a lower-layer protocol, the Internet Protocol (IP), developed jointly with Vinton Gray Cerf. Information from an upper-layer protocol is encapsulated inside IP packets and sent according to IP procedures. IP can be used over any heterogeneous network.

In fact, the main functions of IP are addressing and routing. Every host computer in the network is assigned an IP address, and hosts and gateways use these addresses to route packets towards their destination.
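A small sketch of the addressing side of this, using Python’s standard ipaddress module and the address ranges reserved for documentation, shows how a host can decide whether a destination lies on its own network or must be handed to a gateway. The addresses are placeholders chosen for the example.

```python
import ipaddress

# Addresses from the ranges reserved for documentation (RFC 5737).
local_net = ipaddress.ip_network("192.0.2.0/24")      # the host's own network
destinations = [
    ipaddress.ip_address("192.0.2.10"),               # a neighbour on the same network
    ipaddress.ip_address("203.0.113.5"),              # a host somewhere else on the Internet
]

for dst in destinations:
    if dst in local_net:
        print(f"{dst}: deliver directly on the local network")
    else:
        print(f"{dst}: forward to the default gateway")
```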

Hypertext Transfer Protocol (HTTP)

In the early 1990s the Hypertext Transfer Protocol (HTTP) was proposed by the Network Working Group, whose members included Roy T. Fielding, Henrik Frystyk Nielsen and Tim Berners-Lee. The developers of the protocol themselves defined it as ‘an application-level protocol with the lightness and speed necessary for distributed, collaborative, hypermedia information systems.

It is a generic, stateless, object-oriented protocol which can be used for many tasks, such as name servers and distributed object management systems, through extension of its request methods (commands).’

The reason was the need for a protocol able to operate in the World Wide Web environment and retrieve HTML pages. As for the basic rules and procedures, ‘HTTP operates over TCP connections, usually to port 80, though this can be overridden and another port used.

After a successful connection, the client transmits a request message to the server, which sends a reply message back. The simplest HTTP message is ‘GET url’, to which the server replies by sending the named document.’

It is also possible to send additional header fields, one per line, terminating the message with a blank line. The server replies accordingly, first with a series of header lines, then a blank line, and then the document itself. Full headers should be used, because the first line of the server’s headers carries a response code indicating the success or failure of the request.
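The exchange described above can be reproduced with a few lines of Python using a raw TCP socket. "example.com" is only a placeholder host for the sketch, and the example assumes it is reachable on port 80.

```python
import socket

HOST = "example.com"                                   # placeholder host for the example

# A minimal HTTP/1.0 request: a request line, one header per line, then a blank line.
request = (
    "GET / HTTP/1.0\r\n"
    f"Host: {HOST}\r\n"
    "\r\n"
).encode("ascii")

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(request)
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:                                  # server closed the connection
            break
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("iso-8859-1"))                    # the first line carries the response code
```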

Header fields may include the following information: the document type and how it should be interpreted (Content-Type), a new location if the document has moved, so that the client can retry the request using the new URL (Location), access controls (Authorization), and others.

HTTP defines a number of request methods:

HEAD (identical to GET, but without the response body);
GET (asks for a document from the specified resource);
POST (sends information to be processed to the identified resource);
PUT (uploads a document to the specified resource);
DELETE (erases the specified resource);
TRACE (allows the client to see what intermediate servers are adding or changing);
OPTIONS (checks the functionality of a web server by asking which HTTP methods it supports);
CONNECT (used with a proxy that can switch to being an SSL tunnel).
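The sketch below issues a few of these methods with Python’s standard http.client module and prints the response code and Content-Type header for each. "example.com" is again only a placeholder, and many servers will refuse methods other than GET, HEAD and OPTIONS.

```python
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)   # placeholder server

for method in ("HEAD", "GET", "OPTIONS"):
    conn.request(method, "/")                         # send the request line and headers
    response = conn.getresponse()
    print(method, response.status, response.reason)   # e.g. "GET 200 OK"
    print("  Content-Type:", response.getheader("Content-Type"))
    response.read()                                   # drain the body so the connection can be reused

conn.close()
```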

Conclusions

Computer protocols have evolved greatly over the past four decades. At the dawn of the computer revolution, network protocols faced serious constraints on the reliability and effectiveness of data transfer. Modern protocols, however, allow information to be transferred and retrieved at high speed while ensuring data integrity.
