**THE RAPID GROWTH OF THE NET**
The Internet got its start when scientists and engineers created and interconnected computer networks. Its history, the domain name system, its meaning and functions, network packets, and FTP, HTTP, and HTTPS requests are some of the features we get to look at. It originated from research and development in the United States that involved international cooperation.
A bit on my understanding of the net: I am very much aware that it serves as a platform for cooperation and communication between people and computers, as well as a worldwide broadcasting capability and a method of distributing information. It further serves as a successful illustration of the benefits of sustained funding and dedication to research and development in information infrastructure. As I browsed through Cloudflare, I ran across the four main focuses of the Internet's history: technological advancement, administration and operations, social elements, and commercialization.
Technological progress began with early studies on packet switching and the ARPANET, and ongoing research continues to extend the capabilities of the infrastructure in areas like scalability, performance, and higher-level functionality, as TechTarget further states. Operations and administration require a vast and complicated operational infrastructure, while the social element is the large community of Internet users who collaborate to develop and advance the technology. Commercialization is the efficient transfer of research findings into a widely accessible and widely deployed information infrastructure. Many today still refer to the Internet as the first prototype of the National Information Infrastructure, but it is really a widely used information infrastructure already. Technology, organization, and community are just a few of the numerous facets that make up its complicated past.
Since we use online technologies more and more on a daily basis for electronic commerce, information gathering, and community activities, the Internet's impact is seen not only in the technical domain of computer communications but also in general culture.
Tracking back through the history of the Internet, I further discovered that MIT's J.C.R. Licklider provided the first documented account of the social interactions made possible by networking in his writing on the "Galactic Network" concept in 1962. His vision was of a set of globally connected computers, much like the Internet of today, that would provide instant access to information and applications from any location. Licklider served as the first director of DARPA's computer research program and persuaded Ivan Sutherland, Bob Taylor, and Lawrence G. Roberts, among others, of the value of this networking concept.
I further discovered that MIT's Leonard Kleinrock published the first paper on packet switching theory in 1961 and the first book on the subject in 1964. A significant step toward computer networking was taken when Kleinrock persuaded Roberts that communication using packets rather than circuits was theoretically feasible. Roberts built the first wide-area computer network ever constructed in 1965 when he used a low-speed dial-up telephone line to link the Q-32 computer in California to the TX-2 computer in Massachusetts, as stated by Cloudflare. In fact, it became clear that time-shared computers could work well together, with the remote computer running applications and retrieving data as needed.
Lastly came the ARPANET's launch in 1969, when BBN installed the first node of the ARPANET at UCLA's Kleinrock-led Network Measurement Center. A second node was then established at the Stanford Research Institute (SRI), home of the Network Information Center led by Elizabeth Feinler, and linked to the first host computer, which was connected to the first IMP at UCLA. The Internet was just getting started, with four host computers linked to the ARPANET by the end of 1969.
Using the network and its underlying structure were the main topics of network research. In December 1970, the Network Control Protocol (NCP), the original ARPANET host-to-host protocol, was said to be finished. Electronic mail was then launched as the first "hot" technology when the ARPANET was first shown off in public at the International Computer Communication Conference (ICCC) in 1972, as was also stated by TechTarget. This marked the beginning of the growth of what is known today as "people-to-people traffic" on the Internet.
**WHAT IS THE DOMAIN NAME SYSTEM (DNS) AND WHAT ARE SOME OF ITS FEATURES**
It is well known that every computer on the Internet, from the servers that host content for large retail websites to your smartphone or laptop, uses numbers to locate and communicate with one another. We call these numbers IP addresses. This further comes to my understanding that when you launch a web browser and go online, DNS allows you to visit a website without having to memorize and input a lengthy number. You can still get to the correct location by using a web address like example.com instead.
DNS services, like Amazon Route 53 for instance, are globally distributed services that convert human-readable domain names, such as www.example.com, into the numeric IP addresses that computers use to communicate with one another. The Internet's DNS system controls this mapping of names to IP addresses, much like a telephone directory. When a user types a domain name into their web browser, DNS servers convert that name into an IP address, thereby determining which server the user will reach. These requests are called queries.
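To make the "telephone directory" idea concrete, here is a minimal sketch in Python using the standard library's resolver; example.com is just an illustrative hostname, and the address returned will depend on the network you run it from.

```python
import socket

# Ask the operating system's DNS resolver to translate a
# human-readable name into the numeric IP address computers use.
# "example.com" is an illustrative hostname.
ip_address = socket.gethostbyname("example.com")
print(ip_address)  # an IPv4 address string, e.g. "93.184.216.34"
```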
The Domain Name System is divided into two parts, namely authoritative DNS and recursive DNS. When it comes to authoritative DNS, I further understand that this is where developers manage their public DNS names through the update mechanism offered by an authoritative DNS provider. It then answers DNS queries, converting domain names into IP addresses so that devices can communicate with one another. Because authoritative DNS holds the final authority over a domain, recursive DNS servers rely on it to respond with the IP address information. Amazon Route 53, for instance, is an authoritative DNS system.
Recursive DNS, on the other hand, is what clients usually talk to, since they don't query authoritative DNS services directly. Instead, they generally connect to a resolver, also referred to as a recursive DNS service. Recursive DNS services function like a hotel concierge, as the Amazon example puts it: they act as a go-between that can fetch DNS records on a client's behalf even though they don't own any of them. That, in brief, is what sets the two apart.
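To illustrate the concierge idea, below is a toy sketch in Python of a caching go-between: it owns no DNS records itself, fetches answers upstream on a client's behalf, and remembers them for a while. The 300-second TTL and the use of the operating system's resolver as the "upstream" are assumptions made purely for illustration.

```python
import socket
import time

# A toy "recursive resolver" with a cache: it holds no DNS records of
# its own, but fetches answers on a client's behalf and remembers
# them, much like the concierge analogy above.
_cache: dict[str, tuple[str, float]] = {}
TTL_SECONDS = 300  # illustrative cache lifetime

def resolve(name: str) -> str:
    entry = _cache.get(name)
    if entry and time.time() - entry[1] < TTL_SECONDS:
        return entry[0]                   # answer served from cache
    ip = socket.gethostbyname(name)       # forward the query upstream
    _cache[name] = (ip, time.time())      # remember the answer
    return ip

print(resolve("example.com"))  # first call queries upstream
print(resolve("example.com"))  # second call hits the cache
```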
The following walkthrough, based on a demonstration by Amazon, illustrates the process by which DNS routes requests to your website and how the Domain Name System works in particular:
• First, a user opens a web browser, types www.example.com into the address bar, and presses Enter.
• The request for www.example.com is routed to a DNS resolver, which is usually managed by an Internet service provider (ISP), such as a cable Internet provider, DSL broadband provider, or business network.
• The ISP's DNS resolver forwards the request for www.example.com to a DNS root name server.
• The ISP's DNS resolver then forwards the request for www.example.com to a TLD name server for .com domains. In response, the name server for .com domains returns the names of the four Amazon Route 53 name servers associated with the example.com domain.
• The ISP's DNS resolver selects one of the Amazon Route 53 name servers and forwards the request for www.example.com to it.
• The Amazon Route 53 name server looks in the example.com hosted zone for the www.example.com record, retrieves the associated value, such as the IP address of a web server, 192.0.2.44, and returns the IP address to the DNS resolver.
• The ISP's DNS resolver finally has the IP address that the user requires. The resolver returns that value to the web browser. The DNS resolver also caches (stores) the IP address for example.com for a duration that you designate, enabling it to respond faster the next time someone browses to example.com.
• The web browser sends a request for www.example.com to the IP address it obtained from the DNS resolver. This is where your content resides, for example on an Amazon EC2 instance hosting a web server, or an Amazon S3 bucket configured as a website endpoint.
• Lastly, the web server or other resource at 192.0.2.44 returns the web page for www.example.com to the web browser, and the browser displays the page.
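Here is a small Python sketch of the last two steps above: resolving www.example.com to an IP address and opening a TCP connection to that address on port 80, roughly the way a browser would before sending its request. The hostname and port are illustrative.

```python
import socket

# Resolve the name, then connect to the resolved address, mirroring
# what a browser does after the DNS resolver returns an IP address.
infos = socket.getaddrinfo("www.example.com", 80, type=socket.SOCK_STREAM)
family, socktype, proto, _, address = infos[0]
print("resolved to", address)        # e.g. ('93.184.216.34', 80)

with socket.socket(family, socktype, proto) as sock:
    sock.connect(address)            # TCP connection to the web server
    print("connected to", address[0])
```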
**WHAT IS A NETWORK PACKET AND HOW DOES IT WORK**
In networking, a packet is a small segment of a larger message. Data sent over computer networks, such as the Internet, is divided into packets. The machine or device that receives these packets then reassembles them.
Let's say, for instance, that writer A is writing a letter to receiver B, but receiver B's mail slot only fits envelopes the size of a small index card. Rather than putting the message on regular paper and then attempting to fit it through the mail slot, writer A breaks it up into much smaller chunks, each only a few phrases long, and writes these portions out on index cards. Receiver B then gets the stack of cards and arranges them so they can read the entire message.
This is comparable to how Internet packets operate. Say a user has a picture to load; the picture file will not be transferred in its whole form from the web server to the user's machine. Rather, it will be divided into data packets, transmitted across the Internet's wires, cables, and radio waves, and then pieced back together into the original image by the user's computer.
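Here is a minimal Python sketch of the index-card idea: a message is broken into fixed-size chunks ("packets") and reassembled at the other end. The 16-byte packet size is an arbitrary choice for illustration; real network packets are far larger.

```python
# Break a message into fixed-size chunks ("packets"), then
# reassemble them, as the receiving machine does.
PACKET_SIZE = 16  # illustrative; real packets carry far more bytes

def split_into_packets(message: bytes) -> list[bytes]:
    return [message[i:i + PACKET_SIZE]
            for i in range(0, len(message), PACKET_SIZE)]

def reassemble(packets: list[bytes]) -> bytes:
    return b"".join(packets)

message = b"This message is far too long to send in one piece."
packets = split_into_packets(message)
assert reassemble(packets) == message
print(f"sent as {len(packets)} packets")
```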
A collection of two or more connected computers is known as a network. The Internet itself is a network of networks, made up of numerous global networks that are linked to each other.
One of the questions that might pop up next is why we need these network packets at all. In theory, it might be possible to transfer files and data across the Internet without breaking them up into discrete packets: one computer could send another a lengthy, uninterrupted line of bits, the discrete units of information, transmitted as electrical pulses, that computers can understand.
However, if more than two computers are involved, this strategy quickly becomes impractical. While the lengthy line of bits travelled between the two computers, no third computer could use the same connections to convey data; it would have to wait its turn.
In contrast to this methodology, the Internet operates as a "packet switching" network. Packet switching is the ability of network equipment to handle packets independently of one another. It also implies that packets can travel via several network paths to reach the same location, as long as they all get there. (In certain protocols, even if each packet travelled a different path, they must still be delivered at the endpoint in the correct sequence.)
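A short Python sketch of that last point: when packets are switched independently, they may arrive out of order, so each one carries a sequence number that the receiver uses to restore the original order. The 8-byte chunk size and the shuffle standing in for different network paths are illustrative assumptions.

```python
import random

# Tag each chunk with a sequence number, shuffle to simulate packets
# arriving over different network paths, then sort on arrival.
message = b"Packets may take different routes to the same place."
chunks = [message[i:i + 8] for i in range(0, len(message), 8)]

packets = list(enumerate(chunks))  # (sequence_number, payload) pairs
random.shuffle(packets)            # out-of-order arrival

# The receiver sorts by sequence number before reassembling.
received = b"".join(chunk for _, chunk in sorted(packets))
assert received == message
```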
According to TechTarget, packet switching lets packets from different computers flow across the same lines in virtually any order. This makes it possible for several connections to occur simultaneously over the exact same networking equipment, which keeps the network responsive. It is why billions of devices today are able to exchange data across the Internet at the same time.
**ROLE PLAYED BY PROTOCOLS IN DATA TRANSMISSION**
Protocols, which Google describes as essential guidelines and practices for data transfer in networks, ensure effective, safe, and precise interaction between electronic devices. Their main responsibilities are error checking during transmission, security, and data integrity. By including a checksum, they guarantee that the data sent by the source device and the data received by the destination device are the same. If the checksums match, the information is deemed intact; if they do not, the information is retransmitted.
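Here is a minimal Python sketch of that checksum idea, using CRC-32 as a stand-in for whatever checksum a real protocol computes; the payload and the "network" in between are simulated.

```python
import zlib

# The sender computes a checksum over the payload; the receiver
# recomputes it. Matching values mean the data is considered intact;
# a mismatch would trigger retransmission.
payload = b"important data"
sent_checksum = zlib.crc32(payload)

# ... payload and checksum travel across the network ...

received_payload = payload  # pretend this just arrived intact
if zlib.crc32(received_payload) == sent_checksum:
    print("data intact")
else:
    print("corrupted -- request retransmission")
```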
According to TutorChase, security is another essential component of data transmission: with protocols like Secure Sockets Layer (SSL) and Transport Layer Security (TLS), data is encrypted before transmission and decrypted upon receipt. In addition, protocols control the size of the data transferred, dividing big files into manageable packets for simpler transfer and reassembling them when they reach their destination. They are also known to regulate the data transfer rate in order to avoid overloading the receiving device. Protocols likewise determine the route data packets travel from origin to destination, especially across large networks with numerous alternative routes. In the current digital age, protocols are essential for communication and data transmission.
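To show the encryption part in practice, here is a minimal Python sketch that wraps an ordinary TCP socket in TLS, so everything sent over it is encrypted before transmission and decrypted on receipt; example.com and the hand-written request line are illustrative.

```python
import socket
import ssl

# Wrap a plain TCP connection in TLS so the bytes on the wire are
# encrypted; the certificate is verified against example.com.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print("negotiated", tls.version())  # e.g. TLSv1.3
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                    b"Connection: close\r\n\r\n")
        print(tls.recv(100))                # first bytes of the reply
```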
**MORE ABOUT FTP, HTTP & HTTPS REQUEST PROTOCOLS**
There are 12 common network protocols, and among the 12 are three that caught my attention: FTP, also known as File Transfer Protocol; HTTP, also known as Hypertext Transfer Protocol; and lastly HTTPS, which is the same as HTTP but more secure, especially when it comes to cyberattacks.
When we look into the File Transfer Protocol (FTP), we see that this is where a client sends a request for a file and the server responds. FTP uses a command channel and a data channel to communicate and exchange files, and it operates over the TCP/IP suite of communications protocols. Through the command channel, clients request files; through the data channel, clients can download, modify, and copy files, in addition to performing other operations.
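Below is a hedged sketch of that two-channel exchange using Python's standard ftplib; the host, the credentials, and the file name are all placeholders, not a real server.

```python
from ftplib import FTP

# Commands (login, change directory, RETR) travel on the command
# channel; the file's bytes arrive over the data channel.
with FTP("ftp.example.com") as ftp:          # placeholder host
    ftp.login("username", "password")        # placeholder credentials
    ftp.cwd("/pub")
    with open("report.txt", "wb") as fh:
        # RETR opens the data channel and streams the file to us.
        ftp.retrbinary("RETR report.txt", fh.write)
```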
According to TechTarget, FTP has become less common since the majority of computers started using HTTP for file sharing. Nonetheless, FTP remains a widely used network protocol for sharing files in more confidential settings, like banking.
As we move to HTTP: similar to FTP, I understand that HTTP is a TCP/IP-based file transfer protocol that is mainly used by web browsers and is widely known by consumers. When a user types in an Internet domain and wants to visit it, HTTP provides the access. When it establishes a connection with the domain's server, HTTP requests the HTML code that organizes and presents the page's design.
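Here is a minimal sketch of that browser-style request using Python's standard http.client; example.com stands in for whatever domain the user typed.

```python
import http.client

# Connect to the domain's server and ask for the page's HTML,
# the way a browser does when the user visits a site.
conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)  # e.g. 200 OK
html = response.read()                   # the HTML the browser renders
conn.close()
```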
HTTPS, or HTTP over Secure Sockets Layer, is a variation of HTTP. HTTPS can encrypt a user's HTTP requests and web pages. According to TechTarget, this makes users feel more protected and can prevent common cybersecurity risks like man-in-the-middle attacks.
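And the same request over HTTPS, which in this sketch is a one-line change: the connection is wrapped in TLS, so the request and the returned page are encrypted in transit.

```python
import http.client

# Identical request, but the connection is wrapped in TLS, which is
# what defeats a man-in-the-middle reading or altering the exchange.
conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```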
**THE FOURTH INDUSTRIAL REVOLUTION:**
As we continue journeying along, the features above are what made, and still make, the Internet what it is today, and they keep getting better and more secure as technology advances through the Fourth Industrial Revolution.