What is Virtualization?

Few recent advances in information technology have offered more benefits than virtualization. Many IT professionals think of virtualization in terms of virtual machines (VMs) and their associated hypervisors and operating systems, but that is not all. An increasingly broad set of virtualization technologies, capabilities, strategies, and possibilities is redefining major parts of IT in organizations everywhere.

New software constantly demands more: applications and operating systems need more data, more processing power, and more memory. Virtualization helps meet that demand by making a single machine act like many.


Defining virtualization

Virtualization is the process of simulating or emulating in software the function of a physical object or resource, with behavior similar to that of the corresponding physical object. In other words, we use abstraction to make software look and behave like hardware, with comparable benefits in flexibility, cost, scalability, reliability, and overall capability and performance, across a broad range of applications. Virtualization, therefore, makes “real” that which is not, substituting the flexibility and convenience of software-based capabilities and services for the same functions realized in hardware.

This involves making a single physical resource (such as a server, an operating system, an application, or a storage device) appear to work as multiple virtual resources; it can also include making many physical resources (such as storage devices or servers) appear as a single virtual resource. It also refers to the creation of a virtual resource such as a server, desktop, operating system, file, storage device, or network.

In simple terms, virtualization is often defined as:

  • The creation of multiple virtual resources from one physical one.
  • The creation of one virtual resource from one or more physical resources.

What is the main aim of virtualization?

The main aim is to manage workloads by transforming traditional computing to make it more scalable. Virtualization has been part of the IT scene for years, and it can be applied to a variety of system layers, including OS-level virtualization, hardware-level virtualization, and server virtualization. A common form is operating-system-level virtualization, where it is possible to run multiple operating systems on a single piece of hardware. Virtualization technology separates the physical hardware from the software by emulating hardware in software. When a different OS runs on top of the primary OS by means of virtualization, it is called a virtual machine.

What is a virtual machine?

A virtual machine is the emulated equivalent of a computer system that runs on top of another. Virtual machines can have access to a number of resources:

  • Processing power, through hardware-assisted but limited access to the host CPU and memory;
  • One or more physical or virtual disk devices for storage;
  • A virtual or real network interface;
  • Devices such as video cards, USB devices, or other hardware that are shared with the virtual machine.

If the virtual machine is stored on a virtual disk, that disk is referred to as a disk image; it contains the files the virtual machine needs to boot, along with any other storage it requires. A virtual machine is a data file on a computer that can be transferred and copied to another computer, just like an ordinary data file. The machines in a virtual setting use two types of file structures: one defining the hardware and the other defining the hard drive. The virtualization software, or hypervisor, provides caching technology that can hold changes to the virtual hardware or the virtual hard disk for writing at a later time. This technology lets a user discard modifications made to the operating system, allowing it to boot from a known state.
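
To make the disk-image side of this concrete, the sketch below drives QEMU's qemu-img utility from Python to create a virtual hard disk, record a known-good snapshot, and later revert to it. It is a minimal illustration, assuming QEMU is installed; the image name and snapshot name are arbitrary examples.

```python
import subprocess

# Create a 10 GB copy-on-write disk image: this file plays the role of the
# virtual machine's hard drive.
subprocess.run(["qemu-img", "create", "-f", "qcow2", "guest.qcow2", "10G"],
               check=True)

# Record the current disk state under a name. Changes made after this point
# can be thrown away by reverting, which is the "boot from a known state"
# behavior described above.
subprocess.run(["qemu-img", "snapshot", "-c", "clean-install", "guest.qcow2"],
               check=True)

# Revert the image to the saved snapshot, discarding later modifications.
subprocess.run(["qemu-img", "snapshot", "-a", "clean-install", "guest.qcow2"],
               check=True)
```

The hardware side of the machine (CPU count, memory, attached devices) lives in a separate definition kept by the hypervisor, matching the two file structures described above.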

What is a hypervisor?

A hypervisor is a program that creates and runs virtual machines. Hypervisors are split into two classes:

  • Type 1 – Also known as native or bare-metal hypervisors, these run directly on the hardware, with guest operating systems running on top of them.
  • Type 2 – These run on top of an existing OS, with guests running at a third level above the hardware.

In modern systems, this distinction is less clear-cut, especially with systems like KVM. KVM, short for Kernel-based Virtual Machine, is a part of the Linux kernel that can run virtual machines directly, although you can still use a system running KVM virtual machines as a normal computer itself.
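
As a small illustration of how KVM sits in the kernel, a Linux host's readiness for it can be checked from user space: hardware virtualization support shows up as the vmx (Intel) or svm (AMD) CPU flag, and the loaded KVM module exposes the hypervisor as the /dev/kvm device node. A minimal sketch:

```python
import os

# Rough check for hardware virtualization extensions: on Linux they appear
# as "vmx" (Intel VT-x) or "svm" (AMD-V) among the CPU flags.
with open("/proc/cpuinfo") as f:
    flags = {
        flag
        for line in f
        if line.startswith("flags")
        for flag in line.split()
    }
hw_virt = bool(flags & {"vmx", "svm"})

# When the KVM kernel module is loaded, the hypervisor is reachable
# through this device node.
kvm_ready = os.path.exists("/dev/kvm")

print(f"CPU virtualization extensions: {'yes' if hw_virt else 'no'}")
print(f"/dev/kvm present:              {'yes' if kvm_ready else 'no'}")
```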

How does virtualization work?

Virtualization is a technology in which an application, a guest operating system, or data storage is abstracted away from the underlying hardware or software. A key use of the technology is server virtualization, which uses a hypervisor to emulate the underlying hardware, including CPU, memory, input/output, and network traffic. The guest OS, which would normally interact with real hardware, now interacts with a software emulation of that hardware, and usually has no idea it is running on virtual hardware. Although the performance of such a system is not equal to that of an OS running on true hardware, virtualization works because most guest operating systems and applications do not need the full capacity of the underlying hardware. Removing the dependence on a given hardware platform provides greater flexibility, control, and isolation. While originally meant for server virtualization, the concept has spread to applications, networks, data, and desktops.
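
To make the intercept-and-emulate idea concrete, here is a deliberately toy sketch (every name in it is invented; nothing here resembles a production hypervisor): a dispatcher stands in for the hypervisor, intercepting "privileged" operations from a simulated guest and servicing them against software-defined device state instead of real hardware.

```python
# Toy model of trap-and-emulate: privileged guest operations are intercepted
# and serviced against software state instead of real devices.
virtual_devices = {"disk": bytearray(512), "nic": []}

def hypervisor_trap(op, *args):
    """Emulate a privileged operation against virtual hardware state."""
    if op == "disk_write":
        offset, data = args
        virtual_devices["disk"][offset:offset + len(data)] = data
    elif op == "net_send":
        virtual_devices["nic"].append(args[0])  # queue the packet in software
    else:
        raise ValueError(f"unhandled privileged operation: {op}")

def guest_execute(instructions):
    for op, *args in instructions:
        if op.startswith(("disk_", "net_")):  # privileged: trap to hypervisor
            hypervisor_trap(op, *args)
        # unprivileged instructions would run directly on the real CPU

guest_execute([("disk_write", 0, b"boot"), ("net_send", b"hello")])
print(bytes(virtual_devices["disk"][:4]))  # b'boot': the guest never saw a real disk
```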


What’s the difference between cloud and virtualization?

At first glance, the two may sound the same, but each has a broader definition that can be applied to many different systems. Virtualization and cloud computing rely on similar models and principles, but they are very different.

  • Virtualization is the replacement of some physical component with a virtual one. Within this general definition are specific types of virtualization, such as virtual storage devices, virtual machines, operating systems, and network elements for network virtualization. Virtualization simply means that someone built a model of something, such as a machine or server, in code, creating software that acts like whatever it is emulating.

Network virtualization is the kind of virtualization closest to cloud computing. Individual servers and other components are replaced by logical identifiers rather than physical hardware. Network virtualization is used both for test environments and for actual network implementations.

  • Cloud computing, however, is a particular kind of information technology setup that involves multiple computers or hardware components sending data through a wireless or IP-connected network. In most cases, cloud computing involves sending input data to remote locations through a somewhat abstract network path known as “the cloud.”

In summary, cloud computing refers to specific kinds of vendor-provided network services, while virtualization is the more general process of replacing tangible devices and controls with a system in which software manages more of the network’s processes.


What are the types of virtualization?

Virtualization can be classified into different layers: desktop, server, file, storage, and network. Each layer has its own advantages and complexities. The technology has many benefits, including low- or no-cost deployment, fuller resource utilization, operational cost savings, and power savings. Deploying virtualization technology requires thorough planning and skilled technical experts. And since virtual devices share the same underlying physical resources, performance may suffer.

1. Network Virtualization/ Virtual Networks

Network virtualization is a way of combining the available resources in a network by splitting the available bandwidth into channels, each of which is independent of the others and can be assigned to a particular server or device in real time. The idea is that virtualization masks the true complexity of the network by dividing it into manageable parts, much as a partitioned hard drive makes it easier to manage your files.

In the internal sense of the term, desktop and server virtualization solutions provide networking access between the host and guest as well as among multiple guests. On the server side, virtual switches are being accepted as part of the virtualization stack. The external sense is probably the more widely used version of the term. Virtual Private Networks (VPNs) have been part of the network administrator’s toolbox for years, with most companies allowing VPN use. Virtual LANs (VLANs) are another commonly used network virtualization concept. With network advances like 10 Gigabit Ethernet, networks no longer need to be structured purely along geographical lines.
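
As an example of the VLAN side of this, a virtual LAN interface can be layered on a physical NIC on a Linux host with the standard iproute2 tooling. A hedged sketch, assuming a Linux machine with iproute2, root privileges, and a physical interface actually named eth0 (the interface name, VLAN id, and address are illustrative assumptions):

```python
import subprocess

def run(cmd):
    # Thin wrapper so each command is visible and failures raise an error.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create VLAN 100 on top of the physical interface eth0. Traffic on eth0.100
# is an independent logical channel carried over the same physical wire.
run(["ip", "link", "add", "link", "eth0", "name", "eth0.100",
     "type", "vlan", "id", "100"])

# Give the virtual interface an address and bring it up.
run(["ip", "addr", "add", "192.0.2.10/24", "dev", "eth0.100"])
run(["ip", "link", "set", "eth0.100", "up"])
```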

The pros of Network Virtualization include:
  • Customization of Access – Administrators can quickly customize access and network options, such as bandwidth throttling and quality of service.
  • Consolidation – Physical networks can be merged into a single virtual network for overall simpler management.

Like server virtualization, network virtualization can bring increased complexity, some performance overhead, and the need for administrators to have a larger skill set.

2. Storage Virtualization/ Virtual Storage

Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device managed from a central console. It can also be described as the process of abstracting logical storage from physical storage. While RAID (redundant array of independent disks) affords this functionality at a basic level, the term typically includes additional concepts such as data migration and caching. Storage virtualization is hard to pin down because the functionality can be provided in many different ways. Typically, it is offered as a feature of:

  • Host-based solutions with special device drivers
  • Array Controllers
  • Network Switches
  • Stand Alone Network Appliances

Each vendor has a different approach in this regard. Another way that storage virtualization is classified is whether it is in-band or out-of-band.

  1. In-band (often referred to as symmetric) virtualization sits in the data path between the host and the storage device, which allows caching.
  2. Out-of-band (often termed asymmetric) virtualization uses special host-based device drivers that first look up the metadata (indicating where a file resides) and then let the host retrieve the file directly from storage. Caching is not possible with this approach.
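
Host-based storage virtualization of the kind listed above is essentially what Linux's LVM (Logical Volume Manager) does: several physical disks are pooled into a volume group, and logical volumes are carved out of the pool. A hedged sketch, assuming root privileges and two spare disks whose names (/dev/sdb, /dev/sdc) and sizes are purely illustrative:

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Mark two physical disks for use by LVM (device names are examples).
run(["pvcreate", "/dev/sdb", "/dev/sdc"])

# Pool them into one volume group: two physical devices now appear as a
# single managed storage resource.
run(["vgcreate", "vg_data", "/dev/sdb", "/dev/sdc"])

# Carve a logical volume out of the pool; the host sees
# /dev/vg_data/lv_share as an ordinary block device, regardless of the
# physical layout beneath it.
run(["lvcreate", "-L", "100G", "-n", "lv_share", "vg_data"])
```
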
General perks of storage virtualization include:
  • Migration – Data can be migrated easily between storage locations without disrupting live access to the virtual partition with most technologies.
  • Utilization – As with server virtualization, use of storage devices can be balanced to address over- and under-use.
  • Management – Hosts can leverage storage on one physical machine that can be centrally managed.

Some of the cons of storage virtualization include:
  • Lack of standardization and interoperability – Storage virtualization is still a concept and not a standard. As a result, much virtualization software does not easily interoperate.
  • Metadata sensitivity – Since there is a mapping between logical and physical locations, the storage metadata and its management become key to a reliably working system.
  • Backout – The mapping between logical and physical locations also makes backing storage virtualization technology out of a system less than a simple process.

3. Server Virtualization

Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The aim is to spare the user from having to understand and manage complicated details of server resources while enhancing resource sharing and utilization and maintaining the capacity to expand later. At the core of such virtualization is a hypervisor (virtual machine monitor): a thin software layer that intercepts operating system calls to hardware and typically provides virtualized CPU and memory for the guest operating systems running on top of it. Server virtualization has a number of benefits for the corporations making use of the technology. Among those frequently listed:

Advantages of Server Virtualization:
  • Increased Hardware Utilization – This results in hardware savings, reduced administration overhead, and energy savings.
  • Additional Security – Clean disk images can be utilized to restore compromised systems. They can also provide sandboxing and isolation to curb attacks.
  • Development – Debugging and performance-monitoring scenarios can easily be set up in a repeatable form. Developers also gain easy access to operating systems they might not otherwise be able to install on their desktops.
Downsides of Server Virtualization:
  • Security concerns – With more entry points such as the hypervisor and virtual networking layer to observe, a compromised image can also be propagated easily with virtualization technology.
  • Additional administration skills – While there are fewer physical machines to maintain, there are more machines in total. Such maintenance may require new skills and experience with software that administrators would not otherwise need.
  • Additional Licensing costs – Many software-licensing schemes do not take virtualization into account. For instance, running four copies of Windows on one box may require four separate licenses.
  • Non-optimal performance – Virtualization partitions resources such as RAM and CPU on a physical machine. This combined with hypervisor overhead does not produce an environment that focuses on maximizing performance.
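
As a small example of working with the hypervisor layer described above, the libvirt management API, via its Python bindings, can enumerate the guests a host is running. A sketch, assuming the libvirt-python package is installed, a libvirt daemon is running, and the local QEMU/KVM instance answers at qemu:///system:

```python
import libvirt  # pip install libvirt-python; requires a local libvirt daemon

# Connect to the system-level QEMU/KVM hypervisor instance.
conn = libvirt.open("qemu:///system")
try:
    # List every defined guest and whether it is currently running.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```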

4. Data virtualization

Data virtualization abstracts the traditional technical details of data and data management, such as location, performance, or format, in favor of broader access and greater resiliency tied to business needs. Data that is spread across many systems can be consolidated into a single source through data virtualization. It allows companies to treat data as a dynamic resource, providing processing capabilities that can combine data from multiple sources, easily accommodate new data sources, and transform data according to user needs.
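
As a deliberately toy illustration of this idea (not any real data-virtualization product), the sketch below hides two different physical sources, a CSV document and a SQLite table, behind one logical query function; every name in it is invented for the example:

```python
import csv
import io
import sqlite3

# Source 1: customer rows living in a CSV file (inlined here for the example).
CSV_DATA = "id,name\n1,Ada\n2,Grace\n"

# Source 2: customer rows living in a relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.execute("INSERT INTO customers VALUES (3, 'Edsger')")

def all_customers():
    """One logical view over two physical sources: the consumer never
    sees where a row actually lives."""
    for row in csv.DictReader(io.StringIO(CSV_DATA)):
        yield int(row["id"]), row["name"]
    for row in db.execute("SELECT id, name FROM customers"):
        yield row

for cid, name in all_customers():
    print(cid, name)   # rows from both sources, through one interface
```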

5. Desktop virtualization/ Virtual desktop infrastructure

Desktop virtualization means virtualizing a workstation load rather than a server, letting the user access the desktop remotely. It is a server-centric computing model that borrows from the traditional thin-client model but is designed to give administrators and end users the ability to host and centrally manage virtual desktops in the data center while giving end users a full PC desktop experience.

Desktop virtualization can be broken down into two categories:

  • Hosted Desktop Virtualization
  • Local Desktop Virtualization

Advantages of desktop virtualization

The advantages of desktop virtualization include most of those in application virtualization as well as:

  • High Availability – Downtime can be reduced with replication and fault-tolerant hosted configurations.
  • Extended Refresh Cycles – Larger-capacity servers and limited demands on client PCs can extend the clients’ lifespan.
  • Multiple Desktops – Users can access multiple desktops suited to various tasks from the same client PC.

The drawbacks of desktop virtualization are comparable to those of server virtualization, with the added drawback that clients must have network connectivity to reach their virtual desktops. This is problematic for offline work and also increases network demands at the office.

Hosted desktop virtualization

Hosted desktop virtualization is like hosted application virtualization, but extends the user experience to the entire desktop. Local desktop virtualization, meanwhile, has played a key part in the success of Apple’s shift to Intel processors, because products like VMware Fusion and Parallels allow easy access to Windows applications. Some of the perks of local desktop virtualization are:

  • Security – Organizations can lock and encrypt only the valuable or sensitive content of the virtual machine/disk. This is better than encrypting a user’s entire disk or operating system.
  • Isolation – Virtual machines allow corporations to isolate corporate assets from third-party devices they do not control. This enables employees to use personal computers for corporate use in some instances thus acting as an additional security feature.
  • Development/Legacy Support – It lets a user’s computer support various configurations and environments it would otherwise not be able to support without multiple hardware additions or a host OS. Examples include running Windows in a virtualized environment on OS X and testing legacy Windows 98 support on a machine running Windows Vista.

6. Application Virtualization

Application virtualization abstracts the application layer away from the operating system, so the app can run in an encapsulated form without depending on the OS underneath. This can allow a Windows application to run on Linux (and vice versa), in addition to adding a level of isolation. Application virtualization differs from operating-system virtualization in that, in the latter case, the whole OS is virtualized rather than only specific applications.

Application virtualization categories

Application virtualization can be broken out into two categories:

  • Local Application Virtualization/Streaming
  • Hosted Application Virtualization

With streamed and local application virtualization, an app can be installed on demand as needed. If streaming is enabled, the parts of the application required for startup are sent first, optimizing startup time. Locally virtualized apps also frequently use virtual registries and file systems to maintain separation from, and avoid polluting, the user’s physical machine. Examples include Citrix Presentation Server and Microsoft SoftGrid. Hosted application virtualization lets the user access applications from their local device that are physically running on a server elsewhere on the network. Technologies such as Microsoft’s RemoteApp make the user experience relatively seamless and include the ability for the remote application to handle local file types.
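
The virtual file systems mentioned above can be caricatured in a few lines: paths the application asks for are silently redirected into a per-application sandbox directory, so its writes never touch the real system locations. A toy sketch, with all paths and names invented for illustration:

```python
import os
from pathlib import Path

SANDBOX = Path("/tmp/app_sandbox")  # per-application private root (example path)

def virtual_open(path, mode="r"):
    """Redirect a path into the sandbox, mirroring the real layout beneath it."""
    redirected = SANDBOX / Path(path).relative_to("/")
    redirected.parent.mkdir(parents=True, exist_ok=True)
    return open(redirected, mode)

# The app believes it is writing /etc/app.conf, but the bytes land in the sandbox.
with virtual_open("/etc/app.conf", "w") as f:
    f.write("setting=1\n")

# Only the sandbox was touched; the real /etc is untouched.
print(sorted(p.relative_to(SANDBOX)
             for p in SANDBOX.rglob("*") if p.is_file()))
```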

Benefits of Application virtualization:
  • Added Security – Virtual applications often run in user mode, isolating them from OS-level functions.
  • Easy Management – Virtual applications can be managed and patched from a central location.
  • Legacy Support – Through virtualization, legacy applications can run on modern operating systems they were not originally designed for.
  • More Access – Virtual applications can be deployed on demand from central locations that provide failover and replication.

Shortcomings of Application virtualization include:
  • Packaging – Applications must be packaged before they can be used.
  • Additional resources needed – Virtual applications may require more resources in terms of storage and processing power.
  • Compatibility – Not all apps can be virtualized easily.

Virtualization can be seen as part of a general trend in enterprise IT that includes autonomic computing, in which the IT environment manages itself based on observed activity, and utility computing, in which processing power is treated as a service that clients pay for as needed. The usual aim of virtualization is to centralize administrative tasks while improving scalability and workload handling.
