Cloud hosting has become a popular choice in the business world for several reasons. It is elastic, able to draw on the resources of thousands of different machines as needed.
The interconnected architecture of a cloud system also means that it is highly redundant, and access to a large pool of hardware enhances its speed. In fact, according to Dr. Geoffrey C. Fox of Indiana University, this technology can frequently process data faster than a supercomputer.

In the early days of the Internet, there were two forms of hosting, dedicated and shared, both of which are still in use. With dedicated hosting, a company could buy or lease an entire server computer that was “dedicated” solely to its use. With shared hosting, multiple companies shared the resources of one machine to reduce upfront and ongoing costs.
Dedicated and shared hosting fit many companies and situations well. However, some companies wanted greater performance and privacy than shared hosting allowed, without incurring the expense of a dedicated server. The virtual private server (VPS) was created to fill this need. A VPS was carved out using virtualization technology, which allowed a portion of a machine to behave like a standalone server, with its own operating system, the ability to reboot independently, and a specified allotment of resources.
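The kind of resource partitioning described above can be sketched with a libvirt-style domain definition. The names and values here are hypothetical, shown only to illustrate how a VPS might be given its own operating system disk, CPUs, and memory on a shared physical host:

```xml
<!-- Hypothetical libvirt domain definition: carves one VPS out of a
     physical host, with its own OS disk, 2 vCPUs, and 2 GiB of RAM -->
<domain type='kvm'>
  <name>customer-vps-01</name>
  <memory unit='MiB'>2048</memory>   <!-- dedicated RAM allotment -->
  <vcpu>2</vcpu>                     <!-- dedicated virtual CPUs -->
  <os>
    <type arch='x86_64'>hvm</type>   <!-- full virtualization: the guest boots its own OS -->
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/customer-vps-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Because each guest boots its own kernel from its own disk image, it can be rebooted independently of the host and of the other guests sharing the same hardware, which is exactly the server-like behavior a VPS promises.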
Originally, a number of virtual private servers were hosted on a single piece of hardware, a physical server. Then cloud technology began to emerge, a field that sought to move computing away from dependence on individual physical devices by spreading tasks across a broad array of hardware. In the case of cloud VPS technology, the same distinctions and parameters applied: each user benefited both from private virtualization and from a broad, robust distribution of resources.
The strengths and challenges of cloud hosting were readily apparent to computer scientists. The use of a large pool of physical hardware made virtual machines remarkably fast (as described above). It also meant that spare capacity on individual machines was no longer wasted, so efficiency improved, and with it, costs fell. Access to thousands of machines dramatically improved the reliability available to hosting customers and built multiple redundancies into providers’ plans automatically.
What many have viewed as the Achilles’ heel of the cloud is security. Conventional wisdom holds that it is easier to protect one device than a thousand of them. However, cloud security protections rapidly matured to meet enterprise-level requirements. As reported in Forbes in 2011, Gen. Keith Alexander, then head of the National Security Agency and US Cyber Command, studied the latest cloud models and found them suitable for safeguarding the Department of Defense.