
HPE accelerates space exploration with first-ever in-space commercial edge computing and artificial intelligence capabilities

IN THIS ARTICLE

  • Spaceborne Computer-2 will deliver 2X faster performance and targeted AI capabilities to tackle data processing needs in orbit and advance space exploration
  • HPE is increasing reliable computing on the ISS as a first step toward supporting NASA’s goals for future, deep space travel

Astronauts and researchers can process data at the edge and speed time-to-insight from months to minutes with launch of HPE’s Spaceborne Computer-2, an edge computing system for the International Space Station

Hewlett Packard Enterprise (HPE) today announced it is accelerating space exploration and increasing self-sufficiency for astronauts by enabling real-time data processing with advanced commercial edge computing in space for the first time. Astronauts and space explorers aboard the International Space Station (ISS) will speed time-to-insight from months to minutes on various experiments in space, from processing medical imaging and DNA sequencing to unlocking key insights from volumes of remote sensors and satellites, using HPE’s Spaceborne Computer-2 (SBC-2), an edge computing system. 

Spaceborne Computer-2 is scheduled to launch into orbit on the 15th Northrop Grumman Resupply Mission to Space Station (NG-15) on February 20 and will be available for use on the International Space Station for the next two to three years. The NG-15 spacecraft has been named the “S.S. Katherine Johnson” in honor of Katherine Johnson, the famed Black female NASA mathematician whose work was critical to the early success of the space program.

Breaking Barriers to Achieve Reliable Computing in Space

The upcoming launch of Spaceborne Computer-2 builds on the proven success of its predecessor, Spaceborne Computer, a proof-of-concept that HPE developed and launched in partnership with NASA in 2017 to operate on the International Space Station (ISS) for a one-year mission. The goal was to test whether affordable, commercial off-the-shelf servers used on Earth, equipped with purpose-built, software-based hardening features, could withstand the shake, rattle and roll of a rocket launch to space and, once there, operate seamlessly on the ISS.

The proof-of-concept addressed the need for more reliable computing capabilities on the ISS, in low Earth orbit (LEO), which had previously been impossible to achieve because the ISS’s harsh environment of zero gravity and high levels of radiation can damage the IT equipment required to host computing technologies.

Additionally, gaining more reliable computing on the ISS is just the first step toward NASA’s goal of supporting human space travel to the Moon, Mars and beyond, where reliable communications are a mission-critical need.

HPE successfully completed its one-year mission with Spaceborne Computer and, through a sponsorship from the ISS U.S. National Laboratory, is now set to launch an even more advanced system, Spaceborne Computer-2, this month. It will be installed on the ISS for wider use over the next two to three years.

Accelerating Space Exploration with State-of-the-Art Edge Computing and AI Capabilities

Spaceborne Computer-2 will offer twice the compute speed, with purpose-built edge computing capabilities powered by the HPE Edgeline Converged Edge system and HPE ProLiant server, to ingest and process data in real time from a range of devices, including satellites and cameras.

Spaceborne Computer-2 will also come equipped with graphics processing units (GPUs) to efficiently process image-intensive data requiring higher resolution, such as shots of polar ice caps on Earth or medical X-rays. The GPU capabilities will also support specific projects using AI and machine learning techniques.

The combined advancements of Spaceborne Computer-2 will let astronauts eliminate the latency and wait times associated with sending data to and from Earth, so they can tackle research and gain insights immediately for a range of projects, including:

 

  • Real-time monitoring of astronauts’ physiological conditions by processing X-rays, sonograms and other medical data to speed time to diagnosis in space.

  • Making sense of volumes of remote sensor data: NASA and other organizations have strategically placed hundreds of sensors on the ISS and on satellites, which collect massive volumes of data that require significant bandwidth to send to Earth for processing. With in-space edge computing, researchers can process on-board image, signal and other data related to a range of events, such as:

      • Traffic trends, by taking a wider look at the number of cars on roads and even in parking lots

      • Air quality, by measuring levels of emissions and other pollutants in the atmosphere

      • Objects moving in space and in the atmosphere, from planes to missile launches

 

“The most important benefit to delivering reliable in-space computing with Spaceborne Computer-2 is making real-time insights a reality. Space explorers can now transform how they conduct research based on readily available data and improve decision-making,” said Dr. Mark Fernandez, solution architect, Converged Edge Systems at HPE, and principal investigator for Spaceborne Computer-2. “We are honored to make edge computing in space possible, and through our longstanding partnerships with NASA and the International Space Station U.S. National Laboratory, we look forward to powering new, exciting research opportunities to make breakthrough discoveries for humanity.”

Proven in Space, Available on Earth: HPE Addresses the Harshest, Outer Edge Environments with Enterprise-Grade Solutions

HPE is bringing to space the same edge computing technologies it delivers for harsh, remote environments on Earth, such as oil and gas refineries, manufacturing plants and defense missions. Spaceborne Computer-2 includes the HPE Edgeline EL4000 Converged Edge System, a rugged and compact system designed to perform in harsher edge environments with higher shock, vibration and temperature levels, and purpose-built to deliver computing power at the edge, collecting and analyzing volumes of data from remotely scattered devices and sensors in space.

As a result of HPE’s proven success in delivering its computing technologies to space, OrbitsEdge, which provides protective hardening features for space computing initiatives, plans to integrate HPE Edgeline Converged Edge Systems with its hardening solution, SatFrame, to enable commercial space companies to deploy computing in orbiting satellites and accelerate exploration.

Coupled with the HPE Edgeline Converged Edge System, Spaceborne Computer-2 will also feature the HPE ProLiant DL360 server, an industry-standard server, for additional high-performance capabilities targeting a range of workloads, including edge, HPC and AI.

“Edge computing provides core capabilities for unique sites that have limited or no connectivity, giving them the power to process and analyze data locally and make critical decisions quickly. With HPE Edgeline, we deliver solutions that are purposely engineered for harsh environments. Here on Earth, that means efficiently processing data insights from a range of devices – from security surveillance cameras in airports and stadiums, to robotics and automation features in manufacturing plants,” said Shelly Anello, General Manager, Converged Edge Systems at HPE. “As we embark on our next mission in edge computing, we stand ready to power the harshest, most unique edge experience of them all: outer space. We are thrilled to be invited by NASA and the International Space Station to support this ongoing mission, pushing our boundaries in space and unlocking a new era of insight.”

Tackling Bigger Research with Edge-to-Cloud Capabilities

Through a collaboration with Microsoft Azure Space, researchers around the world running experiments on Spaceborne Computer-2 have the opportunity to burst to the Azure cloud for computationally intensive processing needs, with results transmitted seamlessly back to SBC-2. Examples being considered by Microsoft Research include (a simplified sketch of this burst pattern follows the list):

 

  • Modeling and forecasting dust storms on Earth to improve future predictions for Mars, where storms can cover the entire red planet and reduce the output of the solar power generation that is critical to meeting mission-essential energy needs

  • Assessing liquid usage and environmental parameters involved in growing plants in space to support food and life sciences by collecting data from hydroponics processes and comparing them with large data sets on Earth

  • Analyzing lightning strike patterns that trigger wildfires by processing a vast amount of data collected from 4K video-streaming cameras that capture lightning strikes happening across Earth

  • Advanced analysis of medical imaging using ultrasound on the ISS to support astronaut healthcare
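
To make the workflow concrete, here is a minimal, hypothetical sketch of the edge-to-cloud burst pattern described above. The endpoint URL, job fields, threshold, and helper names are illustrative assumptions for this sketch, not the actual SBC-2 or Azure Space interfaces.

    # Illustrative sketch of an edge-to-cloud "burst" decision, NOT the real
    # SBC-2 or Azure Space API. Endpoint, threshold, and field names are assumptions.
    import json
    import urllib.request

    AZURE_BURST_ENDPOINT = "https://example.invalid/burst-jobs"  # placeholder URL


    def too_heavy_for_edge(job: dict) -> bool:
        """Decide locally whether a workload exceeds on-board capacity (arbitrary threshold)."""
        return job.get("estimated_gpu_hours", 0.0) > 4.0


    def run_on_edge(job: dict) -> dict:
        """Process the job on the on-board system (stubbed for illustration)."""
        return {"job": job["name"], "processed_at": "edge"}


    def burst_to_cloud(job: dict) -> dict:
        """Send a compact job description to the cloud and return the results to the edge."""
        payload = json.dumps(job).encode("utf-8")
        request = urllib.request.Request(
            AZURE_BURST_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)  # results come back to the edge system for local use


    def process(job: dict) -> dict:
        """Route a job: keep light work on the edge, burst heavy work to the cloud."""
        return burst_to_cloud(job) if too_heavy_for_edge(job) else run_on_edge(job)

The design point the sketch illustrates is that only compact job descriptions and results cross the bandwidth-constrained link, while raw sensor data stays on board.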

 

Call for Submissions: Spaceborne Computer-2 Open for Research

Submissions for research consideration on Spaceborne Computer-2 are open now. To learn more about how to submit a proposal to run experiments on the system, please visit www.hpe.com/info/spaceborne

About Hewlett Packard Enterprise

Hewlett Packard Enterprise is the global edge-to-cloud platform-as-a-service company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way we live and work, HPE delivers unique, open and intelligent technology solutions, with a consistent experience across all clouds and edges, to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com

You are invited to HPE’s Virtual Composable Test Drive Event


Hewlett Packard Enterprise would like to invite you to our Composable Test Drive event. Equip yourself with the core capabilities to manage, configure, and provision HPE Synergy. This one-day workshop is for IT administrators and technical architects and will consist of 30% lectures and 70% hands-on lab exercises.

In this virtual hands-on lab, you will walk away with:

  • A high-level understanding of the associated managed equipment: HPE Synergy Frame, Network Fabric, Compute and Storage modules.
  • Experience using the HPE Synergy Composer (featuring HPE OneView) to provision and manage the HPE Synergy environment.
  • Exposure to the HPE OneView PowerShell scripts, which will be used throughout the lab exercises.

Dates: Two events per month

Time: 09:30 – 15:30 GMT

Register here

Seats for each scheduled event are limited, so please register now. We look forward to you joining us!

 

HPE jumps to #2 in Hyperconverged Infrastructure Systems


IN THIS ARTICLE

  • HPE growth in HCI is built upon a proven strategy that’s squarely focused on providing customers what they need
  • HPE has an HCI portfolio that delivers industry-leading innovation along with partner solutions to deliver the right HCI for every use case

Grows 25X faster than the overall market, 16.3% year-over-year

We are excited to share that HPE continues to outperform the industry in HCI as shown in IDC’s 3Q2020 WW Quarterly Converged Systems Tracker. As organizations strive to increase IT agility, HPE radically simplifies operations with our industry-leading technology and strategic partnerships. And now, in a quarter in which the industry remained flat and Dell Technologies saw revenue fall, HPE grew over 16% YoY to secure the #2 position in HCI systems.

Our growth in HCI is built upon a strategy that’s squarely focused on providing our customers what they need. HCI is a fast-evolving market category – one that was once limited to small businesses and general-purpose applications but can now support a wide range of applications from edge to cloud.

With a broad application mix, it’s important to recognize that no one size fits all – there’s no magical HCI unicorn that can optimally address the needs of every workload and every business. That simply comes down to the fact that different solutions are optimized for different requirements. Are you looking to power enterprise edge environments across hundreds of remote locations? Looking to save on hypervisor licensing costs through an open-source stack? Or do you want to extend your existing VMware licenses? Each question is optimally addressed by a different solution – and that’s why we give our customers a best-of-breed approach that delivers industry-leading innovation along with partner solutions, providing the right HCI for every use case.

But how do you know what’s the best fit for your use cases and requirements? Here’s a quick guide to the when and why behind each of HPE’s HCI appliances.

HPE SimpliVity

HPE SimpliVity is an industry-leading, software-defined HCI solution for virtualized workloads and edge environments. It is the industry’s most efficient all-in-one appliance, achieving high availability with the fewest nodes in its class, and its built-in data protection with multi-site rapid disaster recovery eliminates the need for separate backup software and hardware. HPE SimpliVity has led the market in innovation – from the early days in 2009 to now – delivering a unique experience that simplifies how infrastructure is deployed, managed, and upgraded.

And looking back over the past 12 months, the experience-enhancing innovation from HPE SimpliVity has been non-stop. We added support for HPE InfoSight, to leverage the industry’s most advanced AI for infrastructure, and HPE GreenLake to maximize agility by delivering virtual machines as a service.  Then, we announced the HPE SimpliVity 325 Gen 10 with 2nd Gen AMD EPYC to set a record-breaking benchmark for VDI density for remote workers. And most recently, we added support for cloud-native data protection with HPE Cloud Volumes Backup.

In recognition of our innovation and commitment to raising the bar, HPE SimpliVity was recently named the technology winner in the 2020 CRN Product of the Year awards for Hyperconverged Infrastructure, an esteemed award that demonstrates how HPE SimpliVity stands out from the crowd.

Dave Wundereley at Pitt Ohio summarizes it perfectly: “HPE SimpliVity turned out to be a game-changer in terms of human resource management. We no longer need staff with specific expertise to manage and maintain our technology; the simplicity of HPE SimpliVity makes it much easier to find and retain talent that can handle everything.”

HPE ProLiant DX with Nutanix

One key consideration that organizations need to make when deciding on HCI is what hypervisor software stack they want to run for their virtualized workloads.  The reason it’s important is that this decision drives the overall VM experience for management, orchestration, and automation.  While many enterprises have standardized on VMware ESXi and the related vSphere toolset, many others are looking to open-source alternatives to drive down cost and complexity.

HPE ProLiant DX delivers a turnkey appliance for Nutanix Acropolis and AHV, a license-free hypervisor delivering open-platform virtualization and application mobility. This joint solution, powered by HPE GreenLake for an as-a-service experience, gives organizations both the operational agility of HCI and the financial agility and lower software costs that many are looking for.

HPE vSAN ReadyNodes

And last, but not least, are the HPE vSAN ReadyNodes – a validated solution developed jointly by HPE and VMware. For organizations that have standardized on vSphere and want to deploy vSAN storage, HPE provides an easy on-ramp to HCI and VMware Cloud Foundation.

This joint offering delivers a factory turnkey solution with VMware vSphere, vSAN, and vCenter on various HPE ProLiant DL configurations based on workload requirements. And most importantly, our joint customers enjoy a simple, single source of support through HPE Pointnext for all Level 1 and 2 requests.

A proven strategy that delivers what you need

We’re excited and proud to be recognized for our tremendous growth and execution in HCI. And we thank our customers for entrusting us with their critical infrastructure and applications.  HCI is a fast-growing and fast-evolving market – and with so much innovation on our roadmap ahead we look forward to continuing to lead the market and help deliver the agility our customers need to transform their businesses.

AI and sustainability: The most important tech challenges of 2021


In this episode of Tech Talk, experts lay out some of the top technology trends they see in the coming year and beyond.

In 2020, tech showed its value in ways no one had foreseen. In 2021, it will not only help us rebuild but also help us move forward, in equally significant ways.

In this Tech Talk podcast, Andrew Wheeler, HPE Fellow, vice president and director of Hewlett Packard Labs, and host Robert Christiansen, vice president, strategy, Office of the CTO at HPE, highlight some of the top trends in the coming year. From energy and capability gaps to trustworthy and ethical AI, here are key areas IT leaders will be focused on.

Energy gap

At the top of the list, Wheeler says, is “the sustainability problem,” also known as the energy gap, with IT infrastructure now consuming as much as 20 percent of global energy production.

It’s a “core global challenge,” he says, noting the digital universe is on track to grow by a zettabyte of data every day. How much is that? Well, you’d need 1 billion one-terabyte hard drives to store a single zettabyte of data. “This is the expansion of the digital universe we’re talking about,” Wheeler says.
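
As a quick back-of-the-envelope check on that figure, here is a minimal arithmetic sketch, assuming decimal units (1 ZB = 10^21 bytes, 1 TB = 10^12 bytes):

    # Back-of-the-envelope check: how many 1 TB drives hold 1 zettabyte?
    ZETTABYTE_BYTES = 10**21
    TERABYTE_BYTES = 10**12
    drives_needed = ZETTABYTE_BYTES // TERABYTE_BYTES
    print(f"{drives_needed:,} one-terabyte drives")  # 1,000,000,000 -> one billion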

Add to that new technologies such as 5G and the connectivity and bandwidth it’s going to require, along with “all of the autonomous things that are going to be out there, the fact that we’re going to be embedding intelligence everywhere, and that problem is only going to get worse,” he says.

Please read: The edge is on your mango: Energy harvesting and IoT

Given the accelerating rate at which IT infrastructure is expanding, Wheeler predicts that IT leaders in 2021 and beyond will prioritize investments and breakthroughs that help their organizations do more with less energy consumption and environmental impact.

Capability gap

Another priority will be to address the “capability gap,” he says, which applies to the core infrastructure hosting all of these workloads.

“Right now, our ambitions are going a little bit faster than what our core infrastructure or computers can improve upon,” Wheeler notes, pointing to high-performance computing and AI workloads as examples.

Please read: HPC as a service: High-performance computing when you need it

This is where different types of accelerators and heterogeneous architecture—special-purpose computing engines that boost the speed and energy efficiency of critical workloads—are being used to improve overall performance, he explains.

And over time, Wheeler says, “the holy grail of that trend is going to be more and more dynamic,” to better support the composability of these different systems.

HPE builds sustainable technology solutions by considering the whole lifecycle of its products.

Trustworthy AI

Other areas IT leaders will focus on are related to AI as well: Is AI trustworthy, and is it being used as a force for good?

Those questions encompass a range of concerns, from the techniques used to arrive at results to the privacy, security, and robustness of AI data platforms.

Please listen: Digital ethics: the good and bad of tech

In terms of trustworthiness, Wheeler says the goal will be to take “humans out of the dashboard” and use AI to optimize data center resiliency, energy efficiency, and end-to-end operations.

And from an ethical perspective, Wheeler and Christiansen say eliminating bias in AI and improving “the way people live and work” are key.

“[It is] absolutely core to the culture that our CEO, Antonio Neri, is cultivating,” Wheeler says.

Data deluge: How to get your data management strategy in sync

Vast amounts of data will drive new insights and better business decisions, but only if you have a comprehensive data management plan in place.

We are living in the golden age of data. Thanks to phones, cloud apps, and billions of IoT devices, the volume of enterprise data is growing by more than 60 percent per year, according to IDC.

The idea that data is the new oil has become a cliché. It is the fuel that will power the AI revolution. By applying analytics to vast pools of data, businesses will be able to glean new insights and make better decisions.

But unlike petrochemicals, data can vary wildly from one source to the next. Each enterprise has its own methods of extracting, refining, and applying it. And many organizations still lack the skills to turn their raw data into something useful.

A 2019 study by Experian found that nearly a third of enterprises believe their data is inaccurate. Some 70 percent say they lack direct control over strategic data, such as data on the quality of their customer experience, while 95 percent say poor data quality is hurting their business’s bottom line.

Enterprises need to get their data houses in order now, before the deluge. Here are five steps they can follow.

Don’t be a data hoarder

Most companies already have more data than they know what to do with. They’ve been collecting it for years without a coherent plan.

“Many organizations just collect everything under the assumption they’re going to do something smart with this data in the future,” says Glyn Bowden, chief architect for AI and data science at Hewlett Packard Enterprise. “But when you start creating large pools of data with no indication of where it came from or why it was collected, then it’s open to interpretations that could be wildly wrong.”


And as the volume of data grows at an exponential rate, enterprises are going to face some difficult economic choices, says Mike Leone, senior analyst at Enterprise Strategy Group (ESG).

“Storage and compute may seem infinite in the cloud, but it’s not free to process and analyze data,” he says. “Long term, many organizations won’t be able to afford to do what they want to do. Unless they figure out a way to commoditize the consumption of data, combined with ultra-efficient resource utilization, they’re going to hit a breaking point.”

Please read: Is your approach to data protection more expensive than useful?

Worse, clinging to irrelevant data can also drive the enterprise in the wrong direction, Bowden warns. Organizations may end up changing their core business to match the data, rather than using the data to drive their core business.

“You have to answer the question, ‘Why am I keeping this data?’ Once you know why you’re capturing your data and how you intend to use it, things become more clear,” he says. “You should always start with a business outcome that aligns with your current objectives, not try to pivot just because you’ve got access to new data.”

Dismantle your data silos

Once you’ve identified the data that can drive business outcomes, the next step is to figure out where it resides, how it enters and leaves the organization, and who’s responsible for managing it, Bowden says.

“You need a good understanding of what your data ecosystem looks like and what the challenges are,” he says. “If you’re creating data silos, you need to figure out why. Is it because the data is stuck inside an SQL database that you can’t easily share? Have you created a data lake but nobody else in the organization knows it’s there? Mapping what you have and how you’re using it today is a good place to start.”

Please read: A data fabric enables a comprehensive data strategy

Sometimes silos emerge because the data is owned by a specific business unit that may be reluctant to relinquish control over it, notes David Raab, founder of the Customer Data Platform Institute.

“It’s usually a little more subtle than somebody standing there with their arms crossed saying, ‘I’m not going to share my data with you,'” he adds. “It’s more like, ‘If there’s no benefit to me or my group, then somebody else needs to pay for it.'”

Silos can lead to costly data duplication and prevent the organization as a whole from taking full advantage of that data. Often, they’re created because business unit leaders aren’t thinking hard enough about the big picture, says Anil Gadre, vice president of the Ezmeral go-to-market team at HPE.

“Historically, people have had this notion of ‘I have a dataset and I’m going to do this one thing with it, and I have this other dataset and I’m going to do something else with that,'” Gadre says. “But we are increasingly seeing our customers create larger datasets that are used for multiple purposes.”

For example, Gadre notes, one of the largest insurance companies in the world relies on a massive data lake that feeds 52 different business units, each with dozens of use cases.

“So you might have 500 different applications tapping into this common set of data, because they’re being used in very different ways by those business units,” he says.

In cases where enterprises must comply with data sovereignty regulations, data silos may be unavoidable. But in most situations, it’s better to break them down, Gadre says.

Foster a data-centric culture

Managing data at this scale requires a top-down data governance strategy, says ESG’s Leone.

“Data is growing at an alarming rate, and a majority of it is not analyzed,” he says. “It’s difficult for organizations to ensure trust in data if it’s not properly integrated, cataloged, qualified, and made available to the right tools or—more importantly—the right people. Over the next year, data governance is going to be massively important, especially as organizations look to leverage more high-quality data.”

It’s why many organizations are hiring chief data officers who can reach across different business units to coordinate a unified strategy, Leone adds. But you also need to build a team with the right kinds of skills.

Gadre says organizations are showing an increasing interest in DataOps, creating teams of specialists that can manage the logistics involved in storing, sharing, and securing enormous datasets.

Modernize backup and recovery and get value from your data – all delivered with the agility of the cloud, without the data egress costs or lock-in. HPE GreenLake.

“Think about the supply chain logistics of getting the COVID-19 vaccine rolled out around the country and getting it administered,” Gadre says. “The data logistics problem is not much different. You have to get the data from here to there. You have to get it to the right people who can use it in a timely way. Some data has a very short shelf life and loses value quickly, while other data doesn’t. How do you store the data? How do you recover from failures? It’s a layer cake of the many different things you have to do.”

Prep your data for analytics

The primary reason most enterprises collect massive amounts of data is so they can apply AI to it and make smarter business decisions. But the value of the insights that analytics can provide is only as good as the quality of the data fed to the machine learning models. You know the saying: garbage in, garbage out.

“The biggest driver of successful AI scaling within any organization is having access to well-organized and relevant data,” says James Hodson, CEO of the AI for Good Foundation, a nonprofit organization focused on the use of AI to address societal needs. “Companies that are better at collecting, storing, and analyzing data stand to gain a lot more from the principled introduction of AI into their processes than those that are still figuring out what data gives them an advantage and how to collect it.”

But many companies underestimate the time and effort required to build an effective data infrastructure and hire the right people to manage it, and they may need to spend years collecting data before it becomes truly useful, Hodson says.

Identify the right use cases

Organizations also need to identify what data sources are the most useful for analytics purposes and the proper use cases to apply them to.

“On its own, data is about as useful as oil when you don’t have an engine to put it in,” says Bowden. “Data is only useful in the context of a particular problem you’re trying to solve or a particular inference you’re trying to get to—something that’s actually going to drive business value.”

Some business processes are more conducive to machine learning than others, notes Anastassia Fedyk, assistant professor of finance at the Haas School of Business at the University of California, Berkeley. For example, well-defined prediction problems, such as anticipating potential failures in a piece of industrial equipment, are good candidates for machine learning.

Predictions for problems where factors outside your control can influence results—say, trying to forecast sales after a competitor has introduced a new product in the market—won’t be as accurate, she notes.
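
As a minimal illustration of the kind of well-defined prediction problem Fedyk describes, the sketch below trains a classifier on synthetic sensor readings to flag likely equipment failures. The features, labels, and failure rule are invented for demonstration, not drawn from any real dataset.

    # Minimal sketch: predicting equipment failure from (synthetic) sensor readings.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical features: temperature, vibration, pressure, load
    X = rng.normal(size=(1000, 4))
    # Synthetic "failure" label driven mostly by vibration and temperature
    y = (X[:, 1] + 0.5 * X[:, 0] > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))

Because the label here depends only on measured inputs, the model can learn a stable mapping; the sales-forecasting case is harder precisely because unmeasured external factors shift that mapping.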

“When enterprises ask me where to apply analytics, I ask them, ‘What is the biggest thing preventing you from growing revenue or solving your cost problems?’” says Gadre. “What is the number one thing you’d love to be better at?”

Most organizations end up with five or 10 key business objectives they’d like to drive with analytics. Gadre advises them to assign a team to each objective and give them a few weeks to see if anything interesting arises from the data.

“It’s the classic idea of fail fast,” he adds. “Try it. If it doesn’t work, celebrate what you learned and move on to the next thing. Even if it was a failure, finding out that this dataset was not useful is well worth the money.”

Introducing HPE FlexOffers – GET THE BEST DISCOUNTS ON TAILORED CONFIGURATIONS

Introducing HPE FlexOffers!

HPE FlexOffers provide tailored configurations at competitive prices, allowing you to deliver the best value to your SMB customers.
Through iQuote’s latest enhancements, build your own discounted HPE bundles and make deals more attractive to your customers, with simplified and automated processes within the tool to save you time and money.

DID YOU KNOW?
Westcoast was the first distributor to implement iQuote!

Log in to iQuote today and look out for the FlexOffers logo.

BENEFITS OF SMB 2.0

The system enhancements allow HPE and its partners to focus on driving more sales while speeding up transactional claims processing. The benefits include:

  • Flexible BTO at everyday low prices
  • Discounting designed to incentivize attach
  • Reduced escalated pricing
  • Automated promotion set-up and claiming – a true no-touch sales process

 

We are excited to announce the HPE Get More Tour – SMB Vertical Series

Join us for a fun and dynamic experience – and a great way to learn how to identify new opportunities within the growing and evolving SMB market. You will learn about the digital transformation challenges that are typical for each vertical and about their specific digital business processes and workflows. During the workshop, we will work through customer scenarios, enabling you to practice your skills and develop your knowledge of the appropriate HPE solutions with Microsoft.

REGISTER 

During these 60-minute workshops, you will use the learning platform to access materials and exercises – all supported by expert trainer presentations, group discussions and quizzes.

Verticals covered:

  • Financial Services Solutions for SMB

Dates:

  • February 17 2021
  • February 23 2021

 

This is so much more than just another webinar, so join up today and build your SMB Vertical expertise!

REGISTER