Elastos In A Nutshell: Carrier Network (Part 2)

Written By The Elastos In a Nutshell Team

IP Address Details And Limitations

There are currently two versions of the IP protocol in use: IPv4 (Internet Protocol version 4) and IPv6. IPv4 is still the predominant version, but it has a major flaw. An IPv4 address is 32 bits in size, which means there is a maximum of 4,294,967,296 IPv4 addresses. This number is far too small for the rate at which the internet is growing. There are roughly 7-8 billion people on this planet, so we do not even have enough IPv4 addresses to give each person an address!

IPv6 was created in part to address this issue. An IPv6 address is 128 bits in size, which allows for 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. The problem is that IPv4 is so ingrained that getting the internet to speak IPv6 would require a massive undertaking, one which is taking place only very slowly. Instead of switching over to IPv6 outright, the Internet adopted a band-aid solution while the transition takes place. That solution is Network Address Translation (NAT), which conserves the limited supply of addresses in the IPv4 address space.
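The difference in scale is easy to verify yourself: the figures above are simply 2 raised to the number of address bits, as this short Python snippet (included only as a sanity check) shows.

```python
# Address-space sizes follow directly from the address length in bits.
ipv4_addresses = 2 ** 32     # 32-bit IPv4 addresses
ipv6_addresses = 2 ** 128    # 128-bit IPv6 addresses

print(f"IPv4: {ipv4_addresses:,}")   # 4,294,967,296
print(f"IPv6: {ipv6_addresses:,}")   # 340,282,366,920,938,463,463,374,607,431,768,211,456
```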

Network Address Translation (NAT) 

Network Address Translation is a method of turning one IP address into many. There are ranges of IP addresses, agreed upon globally, that have been reserved for private use. These ranges cannot be used to communicate over the internet with hosts in other networks; they only allow devices to communicate within their own network. In a LAN, for instance, these private IP addresses are assigned (usually automatically) by the network administrator to devices within the LAN.
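The reserved private ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 (defined in RFC 1918). As a rough sketch, Python's standard ipaddress module can be used to check whether an address falls inside one of them:

```python
import ipaddress

# The three RFC 1918 ranges reserved for private use
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

for addr in ["192.168.1.42", "172.20.0.7", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    scope = "private" if any(ip in net for net in PRIVATE_RANGES) else "public"
    print(addr, "is", scope)   # the first two are private, 8.8.8.8 is public
```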

In order to talk with hosts in other networks, these devices need access to a public IP address. A public IP address is assigned to the LAN’s router by the ISP. Network Address Translation allows the router or modem to translate these private addresses into a public IP address and communicate with the internet on the devices’ behalf. Devices assigned these private addresses are often referred to as being “behind” a NAT firewall. These devices are not visible to the rest of the internet; only the router they are connected to is visible. In order to communicate with a device behind a NAT firewall, IP datagrams are addressed to the public IP address of the gateway router, which then uses NAT to figure out the device for which the packet is destined.

The purpose of NAT is to conserve IPv4 addresses across the internet. Instead of assigning each device a public IP address, only one is needed for a network with potentially hundreds of connected devices. Since the private IP addresses assigned to the devices behind the NAT firewall only matter in the context of the local network, the same addresses can be reused in other LANs.
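Conceptually, the router keeps a translation table that maps each internal (private address, port) pair to a port on its single public address. The toy model below is only a sketch of that bookkeeping (the public address and port numbers are made up), not how any real router is implemented:

```python
# A toy model of a NAT router's translation table.
PUBLIC_IP = "203.0.113.7"   # hypothetical public address assigned by the ISP

nat_table = {}              # (private_ip, private_port) -> public_port
next_port = 40000

def outbound(private_ip, private_port):
    """Rewrite an outgoing packet's source to the router's public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def inbound(public_port):
    """Look up which internal device an incoming packet belongs to."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None  # no existing mapping: the packet is dropped

print(outbound("192.168.1.10", 52344))   # ('203.0.113.7', 40000)
print(inbound(40000))                    # ('192.168.1.10', 52344)
```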

While NATs do solve the issue of a limited IPv4 address space, they are a somewhat clunky band-aid solution that would not be needed if everyone switched to IPv6. NATs have some major downsides, one of which is that they wreak havoc on p2p networks. P2P networks rely on direct connections, and if a device is behind a NAT firewall, making a direct connection to it can be very difficult. Developers of p2p networks have had to come up with workarounds, generally called NAT traversal techniques. We will see how Elastos Carrier uses NAT traversal techniques to allow for direct connections.
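To give a flavor of what NAT traversal involves, below is a heavily simplified sketch of one common technique, UDP hole punching. It assumes the two peers have already learned each other's public IP and port out of band (in practice a rendezvous server handles that exchange), and the address and port used here are placeholders. This is only an illustration of the idea, not Carrier's actual implementation:

```python
import socket
import time

# Hypothetical public endpoint of the other peer, learned out of band.
PEER = ("198.51.100.23", 41000)
LOCAL_PORT = 41000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

# Both peers send first. The outgoing packet creates a mapping in the local
# NAT ("punches a hole"), so the other peer's packets are then let back in.
for attempt in range(5):
    sock.sendto(b"punch", PEER)
    try:
        data, addr = sock.recvfrom(1024)
        print("direct connection established with", addr)
        break
    except socket.timeout:
        time.sleep(0.5)
```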

Problems With The Internet 

Notice how in this entire discussion there has been no mention of encryption, authentication, or really any substantial security features. There is a simple reason for this: the Internet was not created with security in mind. In fact, it was created without built-in security on purpose. The Internet was developed according to the model of a group of mutually trusting users attached to a transparent network. 

At the time, this seemed like a good idea, as the internet was built to connect universities and other trusted institutions. That is clearly no longer the case. Users on the internet can no longer be assumed to be trustworthy, yet the underlying infrastructure was never updated to account for this. Let’s detail some of the general vulnerabilities inherent in the Internet.

Encryption

The Internet was developed without built-in encryption. This means that by default, all data sent between hosts travels in plaintext, fully readable by anyone along the path. Hackers exploit this through a method called “packet sniffing.” As the name suggests, packet sniffing involves intercepting packets and reading the data they contain. This data could include sensitive information such as passwords, SSNs, or other private data.

Packet sniffing is particularly easy on wireless networks. A packet sniffing device can connect to a wireless network and record every packet sent over the air, then analyze those packets with software that automatically collects any sensitive data. Since these devices are passive (they receive but do not send), they are very difficult to detect. Hackers can also compromise a business’s gateway router and intercept all packets going through it. You can even download free software such as Wireshark that does the packet sniffing for you.
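The mechanics are not exotic. The sketch below uses a raw socket on Linux (it requires root privileges) to receive every frame the network interface sees, whether or not it was addressed to this machine; tools like Wireshark add sophisticated decoding and filtering on top of exactly this kind of capture:

```python
import socket

# ETH_P_ALL (0x0003) asks the kernel for every protocol on every interface.
# AF_PACKET raw sockets are Linux-specific and need root privileges.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))

for _ in range(10):
    frame, meta = sniffer.recvfrom(65535)   # raw bytes of one captured frame
    print(f"captured {len(frame)} bytes on interface {meta[0]}")
```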

Fortunately, there are two standardized cryptographic protocols that offer developers an opportunity to implement encryption. SSL (Secure Sockets Layer) was the first of these protocols to be created, but over time it was found to have too many exploits and has since been deprecated. In its place, TLS (Transport Layer Security) was created. Although it is called Transport Layer Security, it technically lies above the transport layer. TLS has made real headway in providing secure encrypted connections, although it still has numerous attack vectors, which are presented in detail HERE.

One of the weaknesses of SSL/TLS is that it is implemented only at the discretion of application developers. If an application’s developers do not implement TLS, all data going through that application is unencrypted, and therefore not secure. Furthermore, when newer versions are released to account for exploits, developers need to patch and end users need to update their applications. TLS is not ingrained in the internet by default. Security flaws are constantly being found and patched, but updating can be slow. This is especially true for IoT devices, which end users rarely stay on top of updating.
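For a sense of what that opt-in looks like, here is a minimal sketch using Python's standard ssl module: a plain TCP socket is wrapped in a TLS layer before any application data is sent, and without that wrapping step everything would travel in cleartext. The host name is just an example:

```python
import socket
import ssl

hostname = "example.com"                  # placeholder host for the example
context = ssl.create_default_context()    # loads the system's trusted root CAs

with socket.create_connection((hostname, 443)) as raw_sock:
    # Wrap the plain TCP socket in TLS before sending any application data.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("negotiated", tls_sock.version())   # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))                  # first bytes of the encrypted response
```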

Furthermore, TLS/SSL relies on external institutions to verify authenticity. You may have noticed the green or black padlock that sometimes appears in your browser’s address bar. It signifies that the site presented a certificate and that you are using a secure encrypted connection. However, because packets can be intercepted en route, hackers can present fraudulent certificates, leading users to believe they are visiting a secure site.

To prevent this, there are a number of root Certificate Authorities that manually vet certificate applicants and sign certificates with their private keys. Browsers come pre-installed with the public keys of these root Certificate Authorities, which allows the client to check whether a certificate is valid. The result is that the root of trust for these secure connections is a handful of trusted entities. If any of these entities is compromised, which has happened on several occasions, the results can be devastating. The system of certificate issuance is actually quite complex, and it allows ISPs, governments, and big corporations to track where users go on the web.
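Continuing the earlier sketch, the client can inspect the certificate the server presented and see which authority vouched for it; the chain of signatures it verifies ultimately ends at one of those pre-installed root CAs. Again, the host name is only an example:

```python
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()   # trusts the system's pre-installed root CAs

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()    # details of the validated certificate

subject = dict(pair[0] for pair in cert["subject"])
issuer = dict(pair[0] for pair in cert["issuer"])
print("site:     ", subject.get("commonName"))
print("issued by:", issuer.get("commonName"))
print("expires:  ", cert["notAfter"])
```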

Read about one of the largest CA hacks HERE

Authentication 

Denial Of Service Attacks:

Recall that the Internet was built for a group of mutually trusting users. The result is that all users and devices on the internet are whitelisted by default. Any user has the ability to send packets to any other user on the internet, and no authentication of any sort is required before packets can be sent or received. This lack of authentication has led to one of the largest issues the internet faces: Denial of Service (DoS) attacks.

DoS attacks are prevalent precisely due to the lack of built-in authentication on the Internet. Anyone can send packets to any user, device, or server. Hackers take advantage of this by spamming other hosts with packets in the hopes of overwhelming them and rendering them useless. They can also open a huge number of TCP connections with a host to prevent it from accepting legitimate connection requests. There are many other types of DoS attacks, some of which only require a malicious entity to send a few packets in the right sequence to crash a host. 
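The underlying point is simply that nothing in IP or UDP asks who you are before delivering data. The sketch below (aimed at an address from a reserved documentation range) sends a single unsolicited datagram with no handshake and no authentication; flooding-style DoS attacks abuse exactly this property, just at enormous volume:

```python
import socket

# Hypothetical target taken from a reserved documentation range (TEST-NET-2).
TARGET = ("198.51.100.50", 9)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"unsolicited datagram", TARGET)   # no handshake, no authentication
print("datagram sent; the network delivers it with no questions asked")
```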

Even worse than DoS attacks are DDoS (Distributed Denial of Service) attacks. Hackers can take control of many devices across the globe and use them as tools to spam a certain server or user. Attackers have taken control of tens of thousands of devices, turning them into botnets that spam whatever server or user the attacker targets. In response, mitigation services have sprung up to help absorb these attacks; without them, the internet would grind to a halt. However, these services are both costly and not totally effective. Furthermore, as the IoT grows in the coming years, the DDoS problem will become dramatically worse.

Several years ago, a hacker took control of some 100,000 IoT devices and used them to shut down a large portion of the internet for a day by DDoSing one of the largest DNS providers in the world. The coordinated attack threw packets at a rate of 1.2 terabits per second! As the number of IoT devices continues to grow by the billions, this is going to be a serious problem going forward, especially as more and more mission-critical services that cannot afford to go down for even a very short time are connected to the internet.

https://www.theguardian.com/technology/2016/oct/26/ddos-attack-dyn-mirai-botnet

Identity Masquerading:

Related to the issue of authentication is user identity. While certain apps and companies provide usernames, those usernames only matter in the context of their native applications, and we are at the whim of those who issue them. There is no internet-wide username or ID that is as open as the internet itself is meant to be. The IP address is the closest thing to an internet ID, but it is still insufficient.

In fact, something called IP spoofing is relatively easy to do on the current internet. Anyone can create a packet with someone else’s IP address as the source and pretend to be that user. The internet will deliver that packet with no questions asked, whatever source IP address the hacker decides to write into it. The receiver then thinks it is connecting to someone it trusts, and may accept potentially malicious data.
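The reason spoofing works is visible in any receiving application: the only notion it has of who sent a datagram is the source address copied out of the packet header, and there is no built-in way to verify it. A minimal sketch (the port number is arbitrary):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))   # arbitrary port chosen for the example

data, (src_ip, src_port) = sock.recvfrom(1024)
# src_ip is simply whatever the sender wrote into the packet header.
# If it was forged, this application has no way to tell.
print(f"received {data!r} from {src_ip}:{src_port}")
```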

Malware Spread

Due to the ease of sending packets to anyone with an internet connection, malware can spread quickly throughout the internet. A hacker can send out thousands of links that appear harmless but actually point to binary executables that download malware onto the host’s device. There are two main types of malware:

  • A virus is a piece of malware that requires user interaction to install itself on the host’s device. A virus can be a program that searches the host’s device for sensitive information such as passwords, or a keylogger or screen-capture program that records everything the user does.
  • A worm is similar to a virus but does not necessarily require user interaction. Worms are usually acquired by running a vulnerable network application. Both viruses and worms are usually self-replicating, meaning that once they infect a host, they search for other vulnerable hosts and send the malware to them too.

So how can authenticated connections help? First, authenticated connections greatly inhibit the contagious nature of malware. If a user downloads some malware, its self-replication mechanism would only be able to send the malware to hosts which are whitelisted by the user that downloaded it. Additionally, requiring authenticated connections would make it far harder for hackers to actually acquire the sensitive information they desire. A user may download a virus, but in order for the hacker to get the sensitive information (passwords, SSNs, etc.), the program has to send that information to some outside server controlled by the hacker. That connection would need to be authenticated too, which adds another layer of difficulty to the process.

Man In The Middle Attacks

The lack of built-in encryption and authentication is largely responsible for a class of attacks known as Man In The Middle (MITM) attacks. This class of attacks is probably the most widespread of all, as it has to do with intercepting packets en route to their destination. Besides the lack of built-in encryption and authentication, there is another reason this class of attacks is so ubiquitous: routing is predictable because of the very centralized nature of the current Internet. These centralization issues are a result of the pervasiveness of the client-server model, which we will address in the coming section.

Payment Layer

The Internet is also in desperate need of a native payment layer that can handle micropayments. In order to purchase anything on the Internet, we need to enter sensitive bank or credit card information. While TLS generally does a good job with encryption, there are plenty of other attack vectors for hackers to exploit. Not only that, but entering this information is cumbersome, and the settlement process going on in the background is very complicated even when the purchase is tiny. This makes it hard to implement features like tipping and contributes to the subscription model’s current dominance. Furthermore, people are generally hesitant to enter their credit card information with newer companies, which makes it hard for those companies to compete with the big players.

Network Architectures

Client-Server Model

A client is an application on a computer that requests information from another computer over a network. Servers are programs on computers that provide requested information or services back to a client. A server is like a vending machine. A person approaches the vending machine and enters the code for an item, and the vending machine responds by serving up the requested item.

 

There are a lot of different applications that use the client-server model, such as the World Wide Web, file transfer, and email. In the case of the web, the browser on your computer is the client. When you enter the address of a web page into the address bar, the browser uses the internet to request that page, and a web server responds by serving the page back to your browser.
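That exchange is easy to see in code. The sketch below, using Python's standard http.client module, plays the role of the browser: the client opens a connection, asks for a resource, and the server answers. The host name is just an example:

```python
import http.client

conn = http.client.HTTPSConnection("example.com")   # the client reaches out to the server
conn.request("GET", "/")                             # ...and requests a resource
response = conn.getresponse()                        # the server answers the request

print(response.status, response.reason)              # e.g. "200 OK"
print(len(response.read()), "bytes of HTML served back to the client")
conn.close()
```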

Clients are typically simple processes running on end-user devices, while servers are usually a multitude of powerful machines housed in large data centers. Servers need to be powerful because they handle thousands of requests from many different clients at any given time. They are always on and exist solely to share their resources with requesting clients. You can imagine just how resource-intensive and expensive a server can be to run.

Now that we know what clients and servers are, let’s get into the client-server architecture. Remember, a network architecture describes how computers on a network are organized and how tasks are allocated among them. In the client-server architecture, the heavy work is done by the server so that the client computer needn’t be powerful at all. For this reason, the client-server network architecture is centralized, because it hinges upon servers at central locations.

Servers can offer many different services to applications like file storage, file transfer, HTTP responses, retrieving emails, and so on. While the client-server model has many benefits, it brings about a lot of issues. As we proceed, we will see how the client-server model has acted as a centralizing force that turned the once decentralized internet into an archipelago of servers.

Why is the C-S model so popular?

The client-server model is by far the most popular computer networking architecture used on the web, and there are three major reasons for this: security, speed and business. 

  • Security: This is probably what first caused the client-server model to become popular. As discussed in the previous section, the internet was built with very little security in mind. As time progressed, security became more and more of an issue, which pushed users toward a shrinking set of trusted applications. For these applications to serve a rapidly growing number of users, bigger and more powerful servers were built.
  • Speed: End user devices used to be very weak and the average consumer bandwidth was very low. Servers allow applications access to powerful shared resources, providing weak end-user devices the ability to run fast and powerful applications. 
  • Business: The client-server model greatly benefits those who control the servers. It concentrates power in the hands of a few companies. Fast forward to the present day, and pretty much all internet traffic goes through the servers of a handful of companies. We will see shortly how business plays a crucial role in the C-S model’s ubiquity.

The client-server model has continued to dominate the web over the years largely due to the rise in e-commerce and social media platforms. We are going to spend some time on the history of the internet’s development in order to illustrate how business has completely transformed the landscape of the internet through the World Wide Web. It’s our hope that through this story you can appreciate the potential of Elastos Carrier and realize the imperative of returning the Internet to its decentralized state for good.

 

It’s story time!!
First, let’s clarify the terms “Internet” and “Web.” Most people use these terms interchangeably, but they are not the same thing. The Internet is a network of networks that allows a computer in one network to communicate with a computer in another network. The Web is built on top of the Internet, and is essentially the internet’s first killer application.

Once upon a time, we had decentralization…

When the Internet was first created, it was totally decentralized. In a decentralized network, each computer is treated as having equal value and acts independently of other computers. There are no central points of failure, so if one computer fails, the network is unaffected. The Internet was created with the purpose of getting data from one computer to another. Many experts have described the internet as ‘dumb,’ and this isn’t necessarily a bad thing. The expression merely conveys that each piece of data sent over the network of networks is treated with equal value and weight regardless of what it contains. The lower levels of the Internet Protocol stack are still decentralized, but as you get closer to the application layer you’ll find that it’s a much different story.

The World Wide Web

It wasn’t until the creation of the World Wide Web (hats off to Sir Tim Berners-Lee) that the Internet really took off. Tim brought together the ideas of clickable links (hyperlinks), Uniform Resource Locators (URLs), and Hypertext Markup Language (HTML). HTML allowed for the creation of more visually appealing websites, unlike the earlier screens full of text in a single font, size, and color. While Tim was at CERN, the European Particle Physics Laboratory, he wrote the first web server program and created the first ever client program in the form of a browser. This was the first proper implementation of the client-server model.

While the World Wide Web propelled the Internet’s massive growth, it marked the beginning of a slow march toward the centralized Internet we know today. In the next part, we will explore some of the ramifications of the C-S model, and how it has contributed to turning the Internet into a giant data harvesting tool.

For further information in video format, check out Famous Amos on YouTube [HERE]

 

Authors

Charles Coombs-Esmail [u/C00mbsie on Reddit]

Amos Thomas [Famous Amos on YouTube]

Michael Ekpo [adeshino on Discord]

 

 
