Application Virtualization

Application virtualization (App-V)

Thomas Olzak, ... James Sabovik, in Microsoft Virtualization, 2010

Publisher Summary

Microsoft Application Virtualization, also known as App-V, is a component of the Microsoft Desktop Optimization Pack. It allows for easier management and maintenance of applications, since they technically reside on a platform separate from the operating system of the client device. The purpose of the App-V Management Server is to deliver prepackaged, configured applications in an "on-demand" fashion to workstations running the App-V Desktop and Terminal Services clients. The App-V Management Server uses Microsoft SQL Server for its data store, and multiple App-V servers can share a single data store. The App-V server authenticates requests and provides security, metering, monitoring, and data gathering. Active Directory is used to manage users and applications. The Microsoft Application Virtualization Streaming Server provides a source of package content for client computers in a remote office away from the Management Server. The package content files are often large, sometimes up to 4 GB in size, so in a production environment they should be placed in a content share that is accessible to client computers over a high-speed local area network. Streaming very large files across a wide area network (WAN) is not recommended because of the typical bandwidth limitations of WAN links.

URL: https://www.sciencedirect.com/science/article/pii/B9781597494311000114

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Application Virtualization

Application virtualization is based on a concept different from virtualizing hardware with methods such as Type 1 and Type 2 virtualization. In application virtualization, the concept involves a local device and an application that is actually installed in a remote location. Application virtualization is also called thin client technology. In this type of technology, the local device provides physical resources such as the CPU and random access memory that may be required to run the software, but nothing is installed locally on the device itself. The underlying principle is to separate the application from the OS, so the application is OS independent. In these implementations the user interface is transparent: the virtual application appears to run locally while it actually executes remotely. This technology is mostly deployed in corporate environments. Examples of this type of implementation are Microsoft Terminal Services, Citrix, and PanoLogic.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000011

Deploying App-V packages

Thomas Olzak, ... James Sabovik, in Microsoft Virtualization, 2010

What is an App-V package?

An App-V package is the next generation of an application installation. Apart from some specialized scenarios, most applications prior to the introduction of App-V were simply "installed" on a user's workstation, and the state of the installation remained largely static unless the user or their network administrator chose to force the application to upgrade or update. An App-V package is much more dynamic in that it can be custom designed to

Reside completely on a user's workstation

Reside partially on a user's workstation and partially on a server

Reside completely on a server, only allowing access to the application from the user's workstation

And many variations in between.

This approach allows you to maintain an App-V package efficiently. For example, an App-V package can be designed to install completely on a user's workstation and still regularly "check in" with the App-V infrastructure to look for updates, and then apply those updates in the background without impacting the user's experience of the application.

Isolating an application addresses application compatibility conflicts that would otherwise make it impossible to run two applications on the same workstation. An example of this is two different applications, each requiring a different version of the Java runtime. Prior to App-V, installing both applications on the same desktop caused pain and frustration. App-V allows you to package an application together with its prerequisites, and then stream the collective package to a workstation without the need to actually "install" anything on the user's workstation. You can do this for any or all applications, including two different versions of the same program.
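To make the Java example concrete, here is a minimal sketch, assuming hypothetical package folders that each bundle their own Java runtime, of how two applications can run side by side by pointing each process at its own private runtime instead of a machine-wide install. App-V performs this separation through its own virtualization layer rather than environment variables; the sketch only illustrates the outcome, and all paths and names are placeholders.

```python
import os
import subprocess

# Hypothetical package layout: each package bundles the runtime it needs,
# so nothing is installed into the machine-wide environment.
PACKAGES = {
    "LegacyReportTool": {
        "java_home": r"C:\AppPackages\LegacyReportTool\jre6",
        "jar": r"C:\AppPackages\LegacyReportTool\report.jar",
    },
    "ModernDashboard": {
        "java_home": r"C:\AppPackages\ModernDashboard\jre8",
        "jar": r"C:\AppPackages\ModernDashboard\dashboard.jar",
    },
}

def launch(package_name: str) -> subprocess.Popen:
    """Start one packaged application with its own private runtime."""
    pkg = PACKAGES[package_name]
    env = os.environ.copy()
    # Each process sees only its bundled runtime, so the two Java versions
    # never conflict with each other or with anything installed locally.
    env["JAVA_HOME"] = pkg["java_home"]
    env["PATH"] = os.path.join(pkg["java_home"], "bin") + os.pathsep + env["PATH"]
    java = os.path.join(pkg["java_home"], "bin", "java.exe")
    return subprocess.Popen([java, "-jar", pkg["jar"]], env=env)

if __name__ == "__main__":
    # Both applications run at the same time on the same workstation.
    processes = [launch(name) for name in PACKAGES]
```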

Application Virtualization Sequencer

The Sequencer is a wizard-based tool that App-V administrators will come to use and appreciate more than any other function in the App-V world. The Sequencer is used to create App-V sequenced applications and produce an application "package." A package consists of several files, including

A sequenced application (.sft) file

Open Software Description (.osd) "link" files

Icon (.ico) files

A manifest XML file that can be used to distribute sequenced applications with electronic software delivery (ESD) systems

A project (.sprj) file

The Sequencer can also be used to build Windows Installer files (.msi) for deployment to clients configured for stand-alone operation. The .sft, .osd, and .ico files are stored in a shared content folder on the Management Server and are used by the App-V client to access and run sequenced applications.
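As a rough illustration of the package layout listed above, the following sketch scans a content folder and reports which of the expected file types are present. The folder path and the checking logic are assumptions for illustration; this is not a tool provided by the Sequencer.

```python
from pathlib import Path

# File types the chapter lists as making up an App-V package.
EXPECTED = {
    ".sft": "sequenced application",
    ".osd": "Open Software Description link file",
    ".ico": "icon",
    ".sprj": "Sequencer project",
    ".xml": "manifest (for ESD systems)",
}

def summarize_package(folder: str) -> None:
    """Print which expected package artifacts exist in a content folder."""
    files = list(Path(folder).rglob("*"))
    for ext, label in EXPECTED.items():
        matches = [f.name for f in files if f.suffix.lower() == ext]
        status = ", ".join(matches) if matches else "MISSING"
        print(f"{label:40s} ({ext}): {status}")

if __name__ == "__main__":
    # Hypothetical content share path; adjust to your environment.
    summarize_package(r"\\appv-mgmt\content\WordProcessorPkg")
```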

Application Virtualization Client

The App-V Client is required on endpoint devices receiving applications from the App-V environment. It allows for management of package streaming on the client device, such as how much local cache is to be used by the application. It also manages how often the application checks in for any changes, as well as any user-specific configuration settings.

URL: https://www.sciencedirect.com/science/article/pii/B9781597494311000126

Choosing the Right Solution for the Task

In Virtualization for Security, 2009

Application Virtualization

Some application virtualization solutions, such as VMware's ThinApp, offer the ability to stream the application to a user's desktop from a file server. By using this approach, administrators can update a single file on a centralized file server so that the next time users start the application, they will get the latest version of that application. By encapsulating an entire application into a single file, the administrator also enables the user to run multiple versions of an application at the same time on the same desktop. Another application virtualization solution is the Microsoft App-V product.
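A minimal sketch of the update model described above might look like the following: each application version is a single self-contained executable sitting on a share, and the client simply launches the newest one it finds. The share path and the naming convention are assumptions for illustration, not part of ThinApp itself.

```python
import subprocess
from pathlib import Path

# Hypothetical share where the administrator drops updated packages,
# e.g. Editor_1.2.exe, Editor_1.3.exe, ...
SHARE = Path(r"\\fileserver\thinapps")

def launch_latest(app_prefix: str) -> None:
    """Run the most recently published package for a given application."""
    candidates = sorted(
        SHARE.glob(f"{app_prefix}_*.exe"),
        key=lambda p: p.stat().st_mtime,  # newest file = latest published version
    )
    if not candidates:
        raise FileNotFoundError(f"No packages named {app_prefix}_*.exe on {SHARE}")
    # Because the whole application is one self-contained file, starting it
    # from the share picks up whatever version the administrator last copied there.
    subprocess.run([str(candidates[-1])], check=True)

if __name__ == "__main__":
    launch_latest("Editor")
```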

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000025

Application Management in Virtualized Systems

Rick Sturm, ... Julie Craig, in Application Performance Management (APM) in the Digital Enterprise, 2017

Application Virtualization

Application virtualization provides organizations with the best return on investment. Applications that are distributed across multiple personal computers present the single biggest problem for IT organizations. Preparing a complex application for deployment can take as much as 10 days; when there are hundreds of applications to manage, this can be a daunting task. As new versions of the application are released, the deployment process must be repeated ad nauseam. Instead of taking control of an application's Windows Installer-based installation, application virtualization grabs the running state and delivers it to each desktop. In essence, this means that applications no longer need to be installed because they can simply be copied. Any virtualized application will run on any version of Windows, so there is no need to revisit applications when they are moved or upgraded from one Windows version to another.

In addition to providing a single image that can be shared by many users, application virtualization also improves patch management of end-user applications and can eliminate the need for wasting hundreds, if not thousands, of hours upgrading systems.

A core concept of application virtualization is application streaming, or the ability to stream applications from a central point to multiple PCs. Similar to video or audio streaming, application streaming buffers content when an application is launched by a user and begins playing the selected content as soon as enough content is available. The rest of the application content is then delivered in the background. The beauty of application streaming is that the process is transparent to the end users.
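The launch behavior described here can be sketched roughly as follows: buffer only the launch-critical portion of the package, start the application, and pull the remaining content in a background thread. The block counts and the fetch function are simplified assumptions, not the actual streaming protocol.

```python
import threading
import time

TOTAL_BLOCKS = 100   # whole package, split into equal blocks (assumption)
LAUNCH_BLOCKS = 15   # portion needed before the app can start (assumption)

def fetch_block(index: int) -> None:
    """Stand-in for pulling one block from the streaming server."""
    time.sleep(0.01)

def stream_and_launch() -> None:
    received = 0
    # 1. Buffer only the blocks needed to start the application.
    for i in range(LAUNCH_BLOCKS):
        fetch_block(i)
        received += 1
    print(f"Launching with {received}/{TOTAL_BLOCKS} blocks cached")

    # 2. Fetch the rest in the background while the user works.
    def background_fill() -> None:
        nonlocal received
        for i in range(LAUNCH_BLOCKS, TOTAL_BLOCKS):
            fetch_block(i)
            received += 1
        print("Package fully cached in the background")

    threading.Thread(target=background_fill, daemon=True).start()
    # ... application runs here; remaining features load transparently ...

if __name__ == "__main__":
    stream_and_launch()
    time.sleep(2)  # keep the demo alive long enough for the background fill
```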

Another concept that has gained a lot of attention in relation to virtualized applications is ThinApp. Developed by VMware, ThinApp encapsulates application files and registry settings into a single package that can be easily deployed, managed, and updated independent of the underlying OS. With this approach, applications can be run on a physical PC, a shared network drive, a virtual desktop, or a USB stick without installing them on the local machine. This technique provides several benefits, including portability, reduced help desk costs, ease of updating, stronger endpoint security, and the ability to run multiple versions of the application on the same machine. This capability is now offered by a number of vendors and provides considerable cost savings, along with improved management and control.

All in all, virtualizing applications provides for greatly simplified application management by eliminating complex software deployment infrastructures and installation processes. It also provides an improved ability to customize individual user application delivery through a transparent user experience.

URL: https://www.sciencedirect.com/science/article/pii/B978012804018800005X

Integrating application and presentation virtualization (Terminal Services)

Thomas Olzak, ... James Sabovik, in Microsoft Virtualization, 2010

Summary

RemoteApp takes application virtualization with Terminal Services one step closer to seamlessness from the user's point of view. Leveraging Active Directory, System Center Configuration Manager, or other distribution methods, you can deploy your remote programs just like you do traditional ones. Coupling this technology with TS Web Access allows you to present your client applications via a Web page, avoiding time-consuming and sometimes costly conventional deployments.

Just as with straight-up Terminal Services, you must remember that not all workstations can take full advantage of these new features. To take full advantage of RemoteApp with TS Web Access, your clients need to be running RDP 6.1. If you are only running RDP 6.0, you will have to forgo TS Web Access.

As you deploy more RemoteApp programs and extend them to more users, be sure to keep an eye on the performance of your Terminal Server(s). Good resource monitoring and management with a tool like Windows System Resource Manager, along with a defined plan such as creating or expanding a Terminal Server farm, will help to keep you out of the doghouse. Make sure that you communicate your plan to management ahead of time, so that when you need to add more hardware the costs do not come as a surprise.

URL: https://www.sciencedirect.com/science/article/pii/B978159749431100014X

An Introduction to Virtualization

In Virtualization for Security, 2009

Application Virtualization

Administrators have always been plagued with the deployment and maintenance of desktop applications. Web applications and dynamically updated applications have been popular solutions to application distribution. Application virtualization seeks to tackle the problem by encapsulating a virtualization layer and all components of an application into a single file that can be run on a user's desktop. Application packages can be instantly activated or deactivated and reset to their default configuration, and because they run in their own computing space, the risk of interference with other applications is mitigated.

Some of the benefits of application virtualization are:

It eliminates application conflicts: Applications are guaranteed to use the correct-version files and proper file/Registry settings without any modification to the operating system and without interfering with other applications (a conceptual sketch of this isolation follows this list).

It reduces roll-outs through instant provisioning: Administrators can create prepackaged applications that can be deployed quickly, locally or remotely over the network, even across slow links. Virtual software applications can even be streamed to systems on demand without invoking a setup or installation procedure.

It runs multiple versions of an application: Multiple versions can run on the same operating system instance without any conflicts, improving the migration to newer versions of applications and speeding the testing and integration of new features into the environment.
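The conceptual sketch below illustrates the isolation behind the first and third benefits. Each packaged application resolves settings through its own private layer first and falls back to the shared system only when that layer has no entry, so two packages can carry conflicting values without modifying the real system or each other. The ChainMap layering is only an analogy for the virtualization layer, not how any particular product implements it.

```python
from collections import ChainMap

# Shared (real) system settings that both applications would otherwise fight over.
system_settings = {"runtime_version": "1.6", "install_dir": r"C:\Program Files\Shared"}

# Each package carries its own overriding layer; lookups hit the layer first,
# then fall through to the system, and writes never leave the layer.
app_a = ChainMap({"runtime_version": "1.5"}, system_settings)
app_b = ChainMap({"runtime_version": "1.8"}, system_settings)

print(app_a["runtime_version"])   # 1.5 - what application A was packaged with
print(app_b["runtime_version"])   # 1.8 - what application B was packaged with
print(app_a["install_dir"])       # falls through to the shared value

app_a["install_dir"] = r"C:\Packages\AppA"   # write lands in A's layer only
print(system_settings["install_dir"])        # unchanged: C:\Program Files\Shared
print(app_b["install_dir"])                  # B still sees the shared value
```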

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000013

Information Security Essentials for Information Technology Managers

Albert Caballero, in Computer and Information Security Handbook (Third Edition), 2017

Public Cloud

There are many core ideas and characteristics behind the architecture of the public cloud, but possibly the most alluring is the ability to create the illusion of infinite capacity. Whether it is one server or thousands, performance appears the same, with consistent service levels that are transparent to the end user. This is accomplished by abstracting the physical infrastructure through virtualization of the operating system so that applications and services are not locked into any particular device, location, or hardware. Cloud services are also on demand, which is to say that you only pay for what you use, which should drastically reduce the cost of computing for most organizations. Investing in hardware and software that is underutilized and depreciates quickly is not as appealing as leasing a service that, with minimal upfront costs, an organization can deploy as an entire infrastructure.

Server, network, storage, and application virtualization are the core components that most cloud providers specialize in delivering. These different computing resources make up the bulk of the infrastructure in most organizations, so it is easy to see the attractiveness of the solution. In the cloud, provisioning these resources is fully automated, and they scale up and down quickly. To assess and compare the risk involved in utilizing a provider or service, it is critical for an organization to understand how each provider protects and configures each of the major architecture components of the cloud. Make sure to request that the cloud provider furnish information regarding the reference architecture in each of the following areas of their infrastructure:

Compute: Physical servers, OS, CPU, memory, disk space, etc.

Network: VLANs, DMZ, segmentation, redundancy, connectivity, etc.

Storage: LUNs, ports, partitioning, redundancy, failover, etc.

Virtualization: Hypervisor, geolocation, management, authorization, etc.

Application: Multitenancy, isolation, load-balancing, authentication, etc.

An important aspect of pulling off this type of elastic and resilient architecture is commodity hardware. A cloud provider needs to be able to provision more physical servers, hard drives, memory, network interfaces, and just about any operating system or server application transparently and efficiently. To do this, servers and storage need to be provisioned dynamically, and they are constantly being reallocated to and from different customer environments with minimal regard for the underlying hardware. As long as the service-level agreements for uptime are met and the administrative overhead is minimized, the cloud provider does little to guarantee or disclose what the infrastructure looks like. It is incumbent upon the subscriber to ask about and validate the design characteristics of every cloud provider they contract services from. There are many characteristics that define a cloud environment; Fig. 24.10 provides a comprehensive list of cloud design characteristics.

Figure 24.10. Characteristics of cloud computing.

Most of the key characteristics can be summarized in the list that follows [15].

On demand: The always-on nature of the cloud allows for organizations to perform self-service administration and maintenance, over the Internet, of their entire infrastructure without the need to interact with a third party.

Resource pooling: Cloud environments are usually configured as large pools of computing resources such as CPU, RAM, and storage from which a customer can choose to use or leave to be allocated to a different customer.

Measured service: The cloud brings tremendous cost savings to the end user due to its pay-as-you-go nature; therefore, it is critical for the provider to be able to measure the level of service and resources each customer utilizes.

Network connectivity: The ease with which users can connect to the cloud is one of the reasons why cloud adoption is so high. Organizations today have a mobile workforce, which requires connectivity from multiple platforms.

Elasticity: A vital component of the cloud is that it must be able to scale up as customers demand it. A subscriber may spin up new resources seasonally or during a big campaign and bring them down when no longer needed. Elasticity is the degree to which a system can autonomously adapt capacity over time (a minimal scaling sketch follows this list).

Resiliency: A cloud environment must always be available as most service agreements guarantee availability at the expense of the provider if the system goes down. The cloud is only as good as it is reliable so it is essential that the infrastructure be resilient and delivered with availability at its core.

Multitenancy: A multitenant environment refers to the idea that all tenants within a cloud should be properly segregated from each other. In many cases a single instance of software may serve many customers, so for security and privacy reasons it is critical that the provider takes the time to build in secure multitenancy from the bottom up. A multitenant environment focuses on the separation of tenant data in such a way as to take every reasonable measure to prevent unauthorized access to, or leakage of, resources between tenants.
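As a provider-agnostic illustration of the elasticity characteristic above, the sketch below adds capacity when measured utilization is high and releases it when utilization is low. The thresholds, instance counts, and the metric function are illustrative assumptions rather than any particular cloud provider's API.

```python
import random

MIN_INSTANCES, MAX_INSTANCES = 2, 20
SCALE_UP_AT, SCALE_DOWN_AT = 0.75, 0.30   # average utilization thresholds

def current_utilization(instances: int) -> float:
    """Stand-in for a monitoring metric (e.g. average CPU across instances)."""
    demand = random.uniform(1.0, 12.0)     # simulated workload, in 'instance-loads'
    return min(demand / instances, 1.0)

def autoscale(instances: int) -> int:
    """One evaluation cycle: grow or shrink the pool based on measured load."""
    load = current_utilization(instances)
    if load > SCALE_UP_AT and instances < MAX_INSTANCES:
        instances += 1                     # pay for more capacity only while it is needed
    elif load < SCALE_DOWN_AT and instances > MIN_INSTANCES:
        instances -= 1                     # release capacity back to the shared pool
    print(f"utilization={load:.2f} -> {instances} instances")
    return instances

if __name__ == "__main__":
    size = MIN_INSTANCES
    for _ in range(10):                    # ten evaluation cycles of the simulation
        size = autoscale(size)
```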

The most significant cloud security challenges revolve around how and where the data is stored, as well as whose responsibility it is to protect it. In a more traditional IT infrastructure or private cloud environment, the responsibility to protect the data, and who owns it, is clear. When a decision is made to migrate services and data to a public cloud environment, certain things become unclear and difficult to prove and define. The most pressing challenges to assess are [3]:

Data residency: This refers to the physical geographic location where the data stored in the cloud resides. There are many industries that have regulations requiring organizations to maintain their customer or patient information within their country of origin. This is especially prevalent with government data and medical records. Many cloud providers have data centers in several countries and may migrate virtual machines or replicate data across disparate geographic regions causing cloud subscribers to fail compliance checks or even break the law without knowing it.

Regulatory compliance: Industries that are required to meet regulatory compliance such as HIPAA or security standards such as those in the PCI typically have a higher level of accountability and security requirements than those who do not. These organizations should take special care of what cloud services they decide to deploy and that the cloud provider can meet or exceed these compliance requirements. Many cloud providers today can provision part of their cloud environment with strict HIPAA or PCI standards enforced and monitored but only if you ask for it, and at an additional cost, of course.

Data privacy: Maintaining the privacy of users is of high concern for most organizations. Whether employees, customers, or patients, personally identifiable information is a high-value target. Many cloud subscribers do not realize that when they contract a provider to perform a service, they are also agreeing to allow that provider to gather and share metadata and usage information about their environment.

Data ownership: Many cloud services are contracted with stipulations stating that the cloud provider has permission to copy, reproduce, or retain all data stored on their infrastructure, in perpetuity—this is not what most subscribers believe is the case when they migrate their data to the cloud.

Data protection: Who is responsible for protecting the data isn't always clear unless it is discussed before engaging the service. Many providers do have security monitoring available, but in most cases it is turned off by default or costs significantly more for the same level of service. A subscriber should always validate that the provider can protect the company's data just as effectively as, or even more effectively than, the company itself.

If these core challenges with public cloud adoption are not properly evaluated, then some potential security issues could crop up. On the other hand, these issues can be avoided with proper preparation and due diligence. The type of data and operations in your unique cloud instance will determine the level of security required.

URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000247

Virtualization and Windows 7

Jorge Orchilles, in Microsoft Windows 7 Administrator's Reference, 2010

Remote Desktop Services

RDS (formerly Terminal Services) is the most commonly used method of application virtualization. This method presents applications to connected users. The application actually runs in a session on the server in the data center while it appears to be running on the local desktop. This is a cost-effective and reliable method of deploying applications to an enterprise. Figure 9.39 shows a simplified diagram of how RDS works.

FIGURE 9.39. Remote Desktop Services

Users, whether local or remote, all connect to the RDS server. The application is displayed to the end user while being executed on the RDS server. This gives equal performance to both local and remote users running the application. When the applications need to be upgraded or patched, they are patched only on the RDS servers. When the users next connect and run the application, they receive the updated version. The RDS server is capable of supporting multiple users on a single server, and there are many new enhancements in RDS with Windows Server 2008 R2 that allow for a variety of connection methods. Web Services, Session Broker, and Network Load Balancing all work together to provide a seamless application virtualization environment for most users.

If your users do not want to connect to a server or a Web page to run their applications, there is a new feature in Windows Server 2008 RDS called RemoteApp. A published application can be converted to a RemoteApp, which can generate a Windows Installer file (.msi) that can be deployed through Active Directory, file download, e-mail, or your SCCM environment to all the targeted users. Once installed on a Windows 7 desktop, double-clicking it launches the application just as if it were installed on the end-user desktop. The connection to the RDS server is established automatically and the application is started. The RemoteApp can add items to the Start menu or place a desktop icon just like a locally installed application.
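Under the hood, a RemoteApp connection is described by a small .rdp text file. The sketch below writes a minimal one with a handful of the common settings; the server name and application alias are placeholders, and the file RemoteApp Manager actually generates contains many more settings than shown here.

```python
from pathlib import Path

def write_remoteapp_rdp(server: str, alias: str, display_name: str, out: str) -> None:
    """Write a minimal RemoteApp .rdp file (a small subset of common settings)."""
    lines = [
        f"full address:s:{server}",          # the RDS / Terminal Server to contact
        "remoteapplicationmode:i:1",         # run a single published app, not a full desktop
        f"remoteapplicationprogram:s:||{alias}",
        f"remoteapplicationname:s:{display_name}",
        "prompt for credentials:i:1",
    ]
    Path(out).write_text("\r\n".join(lines) + "\r\n", encoding="utf-8")

if __name__ == "__main__":
    # Placeholder server and application alias for illustration only.
    write_remoteapp_rdp("rds01.contoso.local", "WordPad", "WordPad (Remote)", "wordpad.rdp")
```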

Using the advanced features of the Remote Desktop Client in Windows 7 allows for mapping of resources to the RDS server, so files and printers can be shared when a user connects. The advanced features can also authenticate a user before a user session is created, relieving extra burden on the RDS server and allowing for more connections and better performance. The drawback to this solution is that a user must be able to connect to the RDS server in some fashion to be able to run an application.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495615000097

System Security

Derrick Rountree, in Security for Microsoft Windows System Administrators, 2011

Publisher Summary

This chapter provides information related to general system security threats, hardware and peripheral devices, OS and application security, virtualization, and system-based security applications. Some security threats are specific to the environment, but many threats are dangerous in any environment. Any environment can be susceptible to viruses, Trojans, rootkits, and privilege escalation. It is important to take the necessary steps to protect the environment from these threats. The systems are the main components of the environment, and system security is crucial in any environment. System protection should entail many layers, because system vulnerabilities exist at many layers. There are hardware, operating system, application, and peripheral device threats. Each type of threat requires a different defense and a different method of remediation. These threats have been further intensified by the adoption of virtualization. One of the key concerns with virtualization is where security should be done. Windows 7 and Windows Server 2008 R2 include a number of applications that help secure systems and protect against various threats.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495943000041