
Top 5 considerations for building a successful VDI strategy


1. Use cases
Identifying the use cases for your business is critical, as it lays the groundwork for the other four considerations.

You must first analyse your business drivers. The question you need to answer is, “What are the key elements that drive my business profits?” These could be tangible assets—salespeople, the number of stores, web ads, manufacturing, managers, applications, designers, and developers. They could be technologies—artificial intelligence (AI), cybersecurity, automation, data analytics, or the Internet of Things (IoT). They could even be relationships, maintaining the trust and confidence of existing customers and business partners.

From the business drivers, create a list of the number and types of users that support each of those drivers. Ideally, you want to allocate users into the three traditional user types (task workers, knowledge workers, and power users), which can be challenging because users are rarely created equal. Use the list of user types below to help you discern.

Your use cases connect your key business drivers to the type of infrastructure required to support them. It is common to find a number of different use cases across an organization, based on the number and type of users. Now that you have a rough idea of the number and types of users you support, build your use cases by revisiting that list with a focus on the specific technology needs behind each use case, such as virtual desktop, virtual application, GPU accelerated, persistent, or non-persistent. From this you can characterize the expected workload compute and storage relative to the end-user requirements.
Be aware that use cases, provisioning models, and virtual machine (VM) profiles all require different resources from a compute node and storage perspective. Understanding these requirements lays the groundwork and sets expectations for the overall scope, utilization, and performance. Some users may require persistence. Others may not. Others still may benefit from application virtualization, GPU, or a combination. Determining use cases and characterizing the workload are critical for understanding the hardware/software stack necessary to support it.


2. Workload
With your business drivers, use cases, and provisioning models established, the workload will begin to take shape. Virtual workloads are defined by VM profiles, which encompass vCPU, RAM, the application stack, image size, persistent versus non-persistent, and/or vGPU requirements. An accurate VM profile is the basis for all node density projections and is critical for establishing node equilibrium with respect to hardware requirements, utilization, and performance.
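As an illustrative sketch of how a VM profile drives node density projections (all profile and node figures below are hypothetical assumptions, not vendor sizing guidance), density is the tighter of the CPU and RAM constraints:

```python
# Illustrative node-density sketch. All VM profile and node figures are
# hypothetical assumptions, not vendor sizing guidance.
from dataclasses import dataclass

@dataclass
class VMProfile:
    name: str
    vcpus: int
    ram_gb: int

@dataclass
class NodeSpec:
    cores: int           # physical cores per node
    ram_gb: int
    vcpu_ratio: float    # vCPU:pCore oversubscription; workload-dependent
    ram_reserve_gb: int  # hypervisor/OS overhead held back from VMs

def vms_per_node(vm: VMProfile, node: NodeSpec) -> int:
    """Density is capped by the tighter of the CPU and RAM constraints."""
    by_cpu = int(node.cores * node.vcpu_ratio) // vm.vcpus
    by_ram = (node.ram_gb - node.ram_reserve_gb) // vm.ram_gb
    return min(by_cpu, by_ram)

# Example: a knowledge-worker profile on a hypothetical 32-core node.
knowledge = VMProfile("knowledge worker", vcpus=2, ram_gb=8)
node = NodeSpec(cores=32, ram_gb=512, vcpu_ratio=4.0, ram_reserve_gb=32)
print(vms_per_node(knowledge, node))  # RAM-bound here: 60 VMs per node
```

In this hypothetical case the profile is RAM-bound (60 VMs) before it is CPU-bound (64), which is exactly the kind of imbalance an accurate VM profile exposes before hardware is selected.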

Your VM profile can be determined in two ways.
1. Do a full physical, virtual desktop, and application assessment.

This process usually requires a consulting engagement and trended data collection using workload assessment tools. This is a great path if you have the time.

The process will:
–Assess physical and virtual requirements
–Define use cases
–Determine feasibility
–Set density expectations

2. Run a proof of concept (POC). Develop a model based upon real-world users with specific user images and use cases relative to the defined VM profile expectations.

This will:
–Define test/success criteria
–Test real-world images across subsets of users specific to individual use cases across the organization


3. Scope and scale
Understanding scope and scale in conjunction with your use cases and workload requirements establishes the ground rules for platform selection. Some platforms and architectures perform and scale better than others relative to workload or use case, which impacts hardware selection.

Scope considerations
The scope determines how the workload will run across use cases and how many users the solution will be required to scale to. For instance, the economics begin to shift in favour of more traditional compute/storage architectures at roughly 2000 users and above, so the number of concurrent users dictates platform selection.
The factors that play into scope include:
• Single or multiple organizations
• Initial footprint versus end footprint
• Global or regional users
• Centralized or decentralized (distributed) resources

Scale considerations
Scale goes hand-in-hand with scope because you need a solid understanding of how the deployment will roll out. Will it be a single site, multiple sites, or global? This determines factors such as WAN versus LAN access, and local compute resources versus regional compute resources.
Beyond the initial deployment, it’s also essential to look ahead. Try to define what the solution is going to look like 6–18 months from now. It’s important not to paint your solution into a corner because the platform doesn’t scale.
The factors that affect scale include:
• Initial and end footprint
• How will scaling occur?
–Phased by user
–Phased by application
–Phased by site
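As a sketch of the arithmetic behind these scale factors (the user counts and per-node density are hypothetical assumptions), each rollout phase converts peak concurrent users into a node count with failover headroom:

```python
# Hypothetical scale sketch: convert peak concurrent users per phase into
# node counts, with N+1 headroom so one node failure can absorb the load.
import math

def nodes_required(concurrent_users: int, vms_per_node: int, n_plus: int = 1) -> int:
    """Nodes needed to host peak concurrency, plus N+1 failover capacity."""
    return math.ceil(concurrent_users / vms_per_node) + n_plus

# Assumed phased-user rollout: initial footprint vs. an 18-month end state,
# at an assumed density of 60 VMs per node.
for phase, users in [("initial", 500), ("end", 2000)]:
    print(phase, nodes_required(users, vms_per_node=60))
```

Sizing both the initial and end footprint up front is what keeps the solution from being painted into a corner on a platform that cannot scale.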


4. Platform
Your platform choice is dictated by your use case, workload, and scope information. Once these variables are firmly understood, the right platform tends to present itself.
Many VDI platforms are available to match specific use cases, workloads, scopes, and deployment models. If you have already decided upon a form factor—traditional rack-based or blade-based servers, converged infrastructure, hyperconverged infrastructure (HCI), disaggregated HCI, or hosted desktop infrastructure (HDI)—then you have already narrowed the options to a good starting point.
If you have specific use cases where resource contention is a concern, consider an HDI solution. HDI can provide a physical 1:1 resource for cases requiring high levels of system responsiveness, such as financial services, traders, light/medium engineering applications, and design and content creation.
For data-intensive workloads like AI or deep learning, consider GPU-accelerated solutions such as those offered by NVIDIA®. Be aware of any centralized storage requirements.


5. Deployment model
The final step is determining which VDI deployment model will ultimately work best for your business, organization, and budget. There are three deployment models to consider.

Private cloud
Hardware and software are located on-premises. This is a traditional model that can be CAPEX or OPEX based. It is of particular interest for highly regulated industries like government, healthcare, and financial services, or for any workloads that require extra emphasis on governance, security, or intellectual property.

Public or managed cloud
Public or managed cloud models provide desktop as a service (DaaS) through a subscription or pay-as-you-go model based on consumption of resources. This approach suits predictable computing needs—such as communication services for a specific number of users—but also offers the flexibility to scale instantly in response to varying peak demands.

Hybrid cloud model
In the hybrid cloud model, deployment spans private, public, or managed cloud environments, so you can get the benefits of both public and private clouds and still take advantage of the existing architecture in a data centre. The inherent flexibility of the hybrid cloud model makes it a good choice for mixed workloads: you can allocate dynamic, frequently changing workloads to the public cloud for easy scalability, and the more predictable or sensitive workloads to a private cloud or on-premises data centre.

See the full PDF outlining the above: Top 5 considerations for building a successful VDI strategy reference guide | HPE