HPE Edge-to-Cloud Solutions Questions and Answers
When should a customer be offered an HPE GreenLake Cloud Service trial?
Options:
When the customer wants to evaluate a high-performance, large-scale power-user VDI cluster in the public cloud.
When the customer wants to evaluate 24/7 support for their mission-critical workloads residing in a colocated datacenter.
When the customer wants to experience Platform as a Service (PaaS) that is inclusive of technology, support, metering, and management.
When the customer wants to experience next generation technology not yet released for public consumption.
Answer:
C
Explanation:
HPE GreenLake Cloud Services are designed to provide customers with a cloud-like experience for their on-premises and edge workloads, with flexible and predictable pay-per-use pricing, self-service provisioning, and unified management. HPE GreenLake Cloud Services offer a range of solutions for different use cases, such as infrastructure, containers, SAP HANA, VDI, private cloud, and HPC. HPE GreenLake Cloud Services also include technology, support, metering, and management as part of the platform as a service offering, which simplifies the customer’s IT operations and reduces their total cost of ownership. The HPE GreenLake Cloud Services Try & Buy Offer is a program that allows customers to try HPE GreenLake Cloud Services for free for 30 days, and evaluate how they can benefit from the cloud economics, agility, and scalability of HPE GreenLake. Customers can choose from various cloud services and test them in their own data center or colocation facility, with no upfront investment or commitment. Customers can also access HPE experts and resources to help them with the trial and the transition to HPE GreenLake. Therefore, a customer should be offered an HPE GreenLake Cloud Service trial when they want to experience platform as a service that is inclusive of technology, support, metering, and management, and see how it can transform their IT and business outcomes. References:
You are generating a customer HPE GreenLake proposal for a customer.
Select the items that are mandatory when submitting the initial proposal to HPE for quoting.
(Choose two.)
Options:
Start Bill of Materials
End Bill of Materials
Signed statement of Work
Completed Order Checklist
Credit Check Form
Answer:
A, D
Explanation:
When generating a customer HPE GreenLake proposal, you need to submit the following mandatory items to HPE for quoting:
- Start Bill of Materials: This is a document that lists the initial hardware and software components, quantities, and prices that are required for the HPE GreenLake solution. It also includes the service level, the billing unit, the minimum and maximum capacity, and the buffer size. The Start Bill of Materials helps HPE to calculate the monthly fee and the buffer charge for the customer.
- Completed Order Checklist: This is a document that contains the essential information and documents that are needed to process the HPE GreenLake order. It includes the customer name, address, contact details, legal entity, billing frequency, payment method, contract term, start date, end date, and signature. It also includes the attachments such as the Start Bill of Materials, the End Bill of Materials, the Statement of Work, the Credit Check Form, and the Customer Acceptance Form.
The other items are not mandatory for the initial proposal, but they may be required later in the order process:
- End Bill of Materials: This is a document that lists the final hardware and software components, quantities, and prices that are delivered and installed for the HPE GreenLake solution. It may differ from the Start Bill of Materials due to changes in the customer requirements, availability, or pricing. The End Bill of Materials helps HPE to reconcile the actual usage and billing with the customer.
- Signed Statement of Work: This is a document that defines the scope, deliverables, responsibilities, and terms and conditions of the HPE GreenLake service. It also includes the service level agreement, the service description, the service activation, the service management, the service reporting, and the service termination. The Statement of Work must be signed by both HPE and the customer before the service can start.
- Credit Check Form: This is a document that authorizes HPE to perform a credit check on the customer to assess their financial stability and creditworthiness. The credit check helps HPE to determine the payment terms and conditions for the HPE GreenLake service.
References: HPE GreenLake Central User Guide, HPE GreenLake for Block Storage MP, HPE GreenLake Edge-to-Cloud Platform User Guide
Your customer has asked you to design a new platform to support their existing VMware cluster. The current environment runs their business applications along with several customer facing applications that are critical to the business. The current platform is two aged C7000 blade chassis with 16 blades in total connected via Fibre Channel to an HPE 3PAR storage array with 230TB of usable capacity. They are using Micro Focus Data Protector and tape for backup.
They are looking to upgrade the environment and improve their recovery times while reducing the management overhead.
Which server, storage, and data protection strategy meet all the customer requirements?
Options:
Answer:
Explanation:
Based on the customer’s requirements, the following strategy would meet their needs:
Server: Synergy 480 Gen10 Plus
Storage: Alletra 6000
Data Protection: StoreOnce with Veeam
Data Protection backup: full/synthetic full local backup rotation
Server: Synergy 480 Gen10 Plus
The Synergy 480 Gen10 Plus is a composable, scalable, and flexible server that can support VMware clusters with high performance, availability, and efficiency. It offers the following benefits for the customer:
- Composable: The Synergy 480 Gen10 Plus can be dynamically configured and reconfigured using software-defined templates and profiles, allowing the customer to optimize their resources for different workloads and applications. The customer can also leverage HPE OneView and HPE Composer to automate and orchestrate their infrastructure management, reducing the complexity and overhead.
- Scalable: The Synergy 480 Gen10 Plus can support up to two Intel Xeon Scalable processors, up to 3 TB of memory, and up to 24 SFF drives or 12 LFF drives per node. It can also be expanded with up to six mezzanine options, including Fibre Channel, Ethernet, and InfiniBand adapters. The customer can scale their VMware cluster horizontally or vertically as their needs grow, without compromising on performance or efficiency.
- Flexible: The Synergy 480 Gen10 Plus can support various operating systems, hypervisors, and applications, including VMware vSphere, VMware vSAN, and VMware Cloud Foundation. It can also integrate with HPE GreenLake, HPE’s edge-to-cloud platform, to provide the customer with a pay-per-use, as-a-service model that can lower their costs and risks.
Storage: Alletra 6000
The Alletra 6000 is a mid-range storage solution that offers flexible performance, scalability, and resiliency for business-critical workloads. It is suitable for the customer’s VMware cluster because:
- Flexible performance: The Alletra 6000 can deliver up to 900K IOPS with sub-300 microseconds latency, supporting the customer’s business and customer-facing applications with consistent and reliable performance. It can also support NVMe and SAS technologies, and offer three performance tiers: Performance, Business Critical, and Mission Critical, allowing the customer to choose the best option for their workloads and service level objectives.
- Scalability: The Alletra 6000 can support up to 16 PB of raw capacity, and up to 64 hosts per system. It can also scale out with HPE Cloud Volumes, HPE’s cloud-native storage service, to provide the customer with hybrid cloud capabilities and flexibility. The customer can scale their storage capacity and performance as their VMware cluster grows, without compromising on availability or efficiency.
- Resiliency: The Alletra 6000 offers a 100% data availability guarantee, ensuring that the customer’s data is always accessible and protected. It also supports various data protection features, such as snapshots, replication, encryption, and erasure coding, to enhance the customer’s data security and recovery. The customer can also leverage HPE InfoSight, HPE’s AI-driven predictive analytics platform, to monitor and optimize their storage performance, health, and utilization, and to prevent issues before they impact their operations.
Data Protection: StoreOnce with Veeam
The StoreOnce with Veeam is a data protection solution that combines HPE’s deduplication appliance and Veeam’s backup and recovery software to provide the customer with fast, efficient, and reliable backup and recovery for their VMware cluster. It offers the following benefits for the customer:
- Fast backup and recovery: The StoreOnce with Veeam can reduce the backup window and the recovery time objective (RTO) for the customer’s VMware cluster, by leveraging source-side deduplication, synthetic full backups, and instant VM recovery. The customer can back up and restore their data in minutes, minimizing the impact of downtime or data loss on their business and customers.
- Efficient storage utilization: The StoreOnce with Veeam can reduce the storage footprint and the bandwidth consumption for the customer’s backup data, by leveraging target-side deduplication, compression, and encryption. The customer can store up to 20 times more backup data on the same amount of storage, and reduce the network traffic by up to 95%, lowering their costs and risks.
- Reliable data protection: The StoreOnce with Veeam can provide the customer with multiple levels of data protection, by supporting local, remote, and cloud backup and recovery options. The customer can also leverage HPE Cloud Bank Storage, HPE’s cloud storage service, to store their backup data in the cloud, enhancing their data durability and availability. The customer can also leverage Veeam’s features, such as backup verification, ransomware protection, and data governance, to ensure their data integrity and compliance.
Data Protection backup: full/synthetic full local backup rotation
The full/synthetic full local backup rotation is a backup strategy that involves creating a full backup of the customer’s VMware cluster once, and then creating synthetic full backups periodically by combining the previous full backup with the incremental backups. This strategy offers the following benefits for the customer:
- Reduced backup window: The full/synthetic full local backup rotation can reduce the time and resources required to create full backups, by eliminating the need to read the entire data set from the source every time. The customer can create synthetic full backups faster and more efficiently, without impacting their production environment or performance.
- Improved recovery point objective (RPO): The full/synthetic full local backup rotation can improve the frequency and granularity of the customer’s backups, by creating incremental backups daily or more often. The customer can capture the changes in their data more frequently, reducing the amount of data loss in case of a disaster or failure.
- Simplified backup management: The full/synthetic full local backup rotation can simplify the customer’s backup management, by reducing the number of backup files and chains that need to be maintained and monitored. The customer can also leverage Veeam’s features, such as backup copy jobs, backup retention policies, and backup reports, to automate and optimize their backup processes and operations.
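The rotation described above can be sketched in code. This is a minimal illustrative model, not Veeam's actual implementation: a synthetic full is simply the most recent version of every block, assembled from the last full backup plus the incrementals taken since.

```python
# Illustrative sketch (not Veeam's implementation): a synthetic full backup
# is modeled as the latest version of every block, built from the last
# full backup plus the incremental backups taken since.

def synthesize_full(full, incrementals):
    """Merge a full backup with ordered incrementals into a synthetic full.

    `full` and each incremental are dicts mapping block IDs to data;
    later incrementals override earlier versions of the same block.
    """
    synthetic = dict(full)               # start from the last full backup
    for inc in incrementals:             # apply incrementals oldest-first
        synthetic.update(inc)            # changed blocks replace old data
    return synthetic

# Example: blocks 1 and 3 changed after the full backup was taken.
full = {1: "a0", 2: "b0", 3: "c0"}
mon = {1: "a1"}                          # Monday incremental
tue = {3: "c1"}                          # Tuesday incremental
print(synthesize_full(full, [mon, tue])) # {1: 'a1', 2: 'b0', 3: 'c1'}
```

Because the merge reads only the existing backup files on the target, the production environment is never touched, which is why the backup window shrinks.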
Your customer would like to adopt a pay-per-use consumption model with GreenLake for Private Cloud Enterprise.
Which important factor should you share with them?
Options:
PCE bare metal servers are not included in pay-per-use billing.
PCE is pay-per-use but must be serviced by an HPE Partner.
The pay-per-use billing includes a minimum reserve.
They can select any pay-per-use unit of measure they prefer.
Answer:
C
Explanation:
HPE GreenLake for Private Cloud Enterprise is a fully managed cloud service that delivers a public cloud-like experience for bare metal, containers, and VMs in your private environment. It is a true pay-per-use solution that allows you to pay for what you use, subject to a minimum monthly reservation fee. The minimum reserve is based on the expected usage of the infrastructure and can be adjusted as needed. The minimum reserve ensures that you have enough capacity to meet your performance and availability requirements, while also benefiting from the flexibility and scalability of the pay-per-use model. You can monitor your usage and billing through the HPE GreenLake Central portal, which provides consumption analytics and insights. References: HPE GreenLake for Private Cloud Enterprise, HPE GreenLake for private cloud data sheet, Modern private cloud made easy: HPE unveils HPE GreenLake for Private Cloud Enterprise
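A simplified model of pay-per-use billing with a minimum reserve can make the concept concrete. This is an illustrative sketch with made-up numbers, not HPE's actual billing formula:

```python
def monthly_bill(metered_usage, minimum_reserve, unit_rate):
    """Simplified pay-per-use bill with a minimum reserve:
    the customer pays for actual metered usage, but never less
    than the committed minimum reservation."""
    billable_units = max(metered_usage, minimum_reserve)
    return billable_units * unit_rate

# Hypothetical example: reserve of 100 units at $5 per unit.
print(monthly_bill(80, 100, 5.0))   # usage below reserve -> pay 500.0
print(monthly_bill(140, 100, 5.0))  # usage above reserve -> pay 700.0
```

The `max()` is the key point: usage above the reserve is billed as consumed, while usage below it still incurs the reserve charge.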
Match the use case with the appropriate cloud deployment model.
Options:
Answer:
Explanation:
According to the HPE Edge-to-Cloud Adoption Framework, page 5, the following matches are correct:
- The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). = Private Cloud
- The cloud infrastructure is provisioned for exclusive use by a specific group of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). = Community Cloud
- The cloud infrastructure is provisioned for open use by everyone. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. = Public Cloud
- The cloud infrastructure is a composition of two or more distinct cloud infrastructures that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability. = Hybrid Cloud
- Private Cloud: A private cloud is a cloud deployment model where the cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises. A private cloud offers the organization more control, security, and customization over their cloud resources, but it also requires more investment, maintenance, and expertise1.
- Community Cloud: A community cloud is a cloud deployment model where the cloud infrastructure is provisioned for exclusive use by a specific group of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations, a third party, or some combination of them, and it may exist on or off premises. A community cloud offers the group of consumers more collaboration, cost-sharing, and alignment over their cloud resources, but it also requires more coordination, governance, and trust1.
- Public Cloud: A public cloud is a cloud deployment model where the cloud infrastructure is provisioned for open use by everyone. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them, and it exists on the premises of the cloud provider. A public cloud offers the consumers more scalability, flexibility, and affordability over their cloud resources, but it also requires more dependency, compliance, and security1.
- Hybrid Cloud: A hybrid cloud is a cloud deployment model where the cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds). A hybrid cloud offers the consumers more choice, agility, and innovation over their cloud resources, but it also requires more integration, management, and complexity1.
What is the correct method to calculate incremental cash flow?
Options:
Subtract projected decrease in revenue or increase in costs from the initial investment.
Divide the increase in revenue by the time needed to offset the IT investment.
Multiply the increase in revenue by the time needed to offset the IT investment.
Subtract projected increase in revenue or decrease in costs from the initial investment.
Answer:
D
Explanation:
Incremental cash flow is the difference between the cash flow of a project with the investment and the cash flow of the same project without the investment. It represents the net change in cash flow that results from making the investment. The correct method to calculate incremental cash flow is to subtract projected increase in revenue or decrease in costs from the initial investment. This method captures the additional cash inflows and outflows that are attributable to the investment, and excludes any cash flows that are unrelated to the investment. Incremental cash flow can help you evaluate the profitability and feasibility of an IT investment by comparing it to the required rate of return or the payback period. References: HPE Edge-to-Cloud Solutions - Self-Directed Lab, HPE Edge-to-Cloud Solutions - Official Certification Study Guide, Incremental Cash Flow Definition, How to Calculate Incremental Cash Flow
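The arithmetic behind the definition can be sketched as follows. This is a minimal illustration with hypothetical figures: the investment's additional inflows (extra revenue plus cost savings) are netted against its initial outlay.

```python
def incremental_cash_flow(revenue_increase, cost_decrease, initial_investment):
    """Net incremental cash flow of an investment over a period:
    additional inflows attributable to the investment (extra revenue
    plus cost savings) netted against the initial outlay."""
    return (revenue_increase + cost_decrease) - initial_investment

# Hypothetical: $120k extra revenue and $30k cost savings
# against a $100k initial IT investment.
print(incremental_cash_flow(120_000, 30_000, 100_000))  # 50000
```

A positive result means the project generates more cash with the investment than without it; cash flows that would occur either way are deliberately excluded.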
What is one of the key differentiators of HPE GreenLake Central?
Options:
GreenLake Central allows the customer to proactively detect security threats like ransomware.
GreenLake Central has built in capabilities to manage multi-cloud environments and multi-vendor hardware solutions like IBM, Dell.
GreenLake Central is the most cost-effective software license in the market that allows multi-cloud environment management.
GreenLake Central has a recommendation engine that helps optimize resources, predict capacity needs, and optimize cloud spending.
Answer:
D
Explanation:
- HPE GreenLake Central is an advanced software-as-a-service platform that provides customers with a consistent cloud experience for all their applications and data, whether they are on-premises or off-premises1.
- HPE GreenLake Central enables customers to run, manage, and optimize their hybrid IT estate, complementing their use of public clouds and data centers1.
- One of the key differentiators of HPE GreenLake Central is its recommendation engine, which uses artificial intelligence and machine learning to provide insights and suggestions for improving the performance, efficiency, and cost of the customer’s IT environment2.
- The recommendation engine can help customers to:
The references are:
1: HPE GreenLake Central User Guide, page 1
2: HPE GreenLake Central helps customers run, manage, and optimize their hybrid cloud estate
Your customer acquired their Gen10 Plus servers through HPE GreenLake last year. They want to add HPE GreenLake for Block Storage to their GreenLake footprint. They ask you for a quote, and you inform them that you can start this process through their HPE GreenLake Central instance via a wizard-driven application.
What information do you need to gather for the wizard? (Choose two.)
Options:
Reserved capacity
HPE customer ID
Site contact info
Datacenter address
Subscription term
Answer:
C, E
Explanation:
To add HPE GreenLake for Block Storage to an existing HPE GreenLake account, you need to provide some basic information to the wizard-driven application in HPE GreenLake Central. According to the HPE GreenLake for Block Storage Getting Started Guide1, you need to enter the following information:
- Site contact info: This includes the name, email, and phone number of the person who will be the primary contact for the site where the block storage service will be deployed. This is required to ensure smooth communication and coordination between HPE and the customer.
- Subscription term: This is the duration of the contract for the block storage service, which can range from 12 to 60 months. This is required to calculate the monthly billing and the minimum committed capacity for the service.
The other options are not required for the wizard. Reserved capacity is the amount of storage capacity that the customer wants to reserve for future use, which can be adjusted later. HPE customer ID is a unique identifier for the customer, which is already associated with the existing HPE GreenLake account. Datacenter address is the physical location of the datacenter where the block storage service will be deployed, which is already known to HPE from the previous HPE GreenLake service. References:
- HPE GreenLake for Block Storage Getting Started Guide
- HPE GreenLake for Block Storage planning overview
Your customer wants to implement an environment for machine learning that requires low latencies and high transfer rates.
Which technology should you recommend to the customer?
Options:
SCSI
RoCE
SMB
NFS
Answer:
B
Explanation:
The technology that should be recommended to the customer for machine learning that requires low latencies and high transfer rates is RoCE. RoCE stands for RDMA over Converged Ethernet, which is a protocol that enables Remote Direct Memory Access (RDMA) over Ethernet networks. RDMA is a technology that allows direct memory access from one computer to another without involving the operating system or the CPU, thus reducing the overhead and latency of data transfers. RoCE offers the following benefits for machine learning environments:
- It supports high bandwidth and low latency data transfers, which are essential for machine learning applications that involve large amounts of data and complex computations12.
- It improves the efficiency and scalability of the network, as it reduces the CPU utilization and the network congestion caused by data transfers13.
- It leverages the existing Ethernet infrastructure, which is widely deployed and cost-effective, and does not require any specialized hardware or software14.
- It is compatible with HPE solutions, such as HPE Alletra, HPE Synergy, and HPE Apollo, which support RoCE and provide optimized infrastructure and management services for machine learning workloads567.
The other options are not suitable for machine learning that requires low latencies and high transfer rates because:
- A. SCSI stands for Small Computer System Interface, a set of command standards for connecting and transferring data between computers and storage devices such as hard disks and tape drives. SCSI is a storage command protocol; it does not itself provide the kernel-bypass, direct-memory transfers that RDMA delivers8.
- C. SMB stands for Server Message Block, a network file-sharing protocol that provides access to files, printers, and other resources on a network. Although the SMB Direct variant can run over RDMA-capable adapters, SMB is fundamentally a file-sharing protocol with higher protocol overhead and latency than RoCE9.
- D. NFS stands for Network File System, a distributed file system protocol that provides access to files over a network. Like SMB, NFS is a file-sharing protocol; even with NFS-over-RDMA extensions, it is not the low-latency, high-bandwidth transport technology this use case calls for10.
References:
- What is RoCE? | Mellanox Technologies
- RoCE: The Key to Unlocking the Full Potential of AI - RoCE Initiative
- HPE Alletra 6000 Data Sheet
- HPE Synergy 480 Gen10 Compute Module - Overview
- HPE Apollo 6500 Gen10 Plus System - Overview
- What is SCSI (Small Computer System Interface)? - Definition from WhatIs.com
- What is SMB (Server Message Block)? - Definition from WhatIs.com
- What is NFS (Network File System)? - Definition from WhatIs.com
You need to determine whether there is resource contention between VMs in an HPE Hyper-V environment.
Which tool should you use?
Options:
HPE CloudPhysics
HPE Single Point of Connectivity Knowledge
HPE NinjaSTARS
HPE InfoSight
Answer:
A
Explanation:
HPE CloudPhysics is a SaaS-based platform that provides data-driven insights and recommendations for optimizing the performance, availability, and cost of virtualized environments. HPE CloudPhysics collects and analyzes data from various sources, such as hypervisors, VMs, storage, and network devices, and applies machine learning and analytics to identify and resolve issues, such as resource contention, misconfigurations, bottlenecks, and inefficiencies. HPE CloudPhysics can help you monitor and troubleshoot HPE Hyper-V environments, as well as compare and plan migrations to HPE GreenLake or other cloud platforms12. References: HPE CloudPhysics | HPE Store US, HPE CloudPhysics - Data Sheet
A Customer wants to expand their existing HPE SimpliVity cluster. You propose adding a similar size host with newer generation Intel CPUs.
What needs to be done to facilitate adding the new hosts to the cluster?
Options:
enable Enhanced vMotion Compatibility
disable Enhanced vMotion Compatibility
disable Distributed Resource Scheduler
enable Distributed Resource Scheduler
Answer:
A
Explanation:
HPE SimpliVity is a hyperconverged infrastructure solution that combines compute, storage, networking, and data services in a single appliance1. HPE SimpliVity clusters are groups of HPE SimpliVity nodes that share the same federation and data center2. HPE SimpliVity clusters can be expanded by adding new nodes to increase the capacity and performance of the cluster3.
However, adding new nodes with newer generation Intel CPUs to an existing HPE SimpliVity cluster may cause compatibility issues for vMotion, the VMware technology that enables live migration of virtual machines between hosts4. vMotion requires that the source and destination hosts have compatible CPUs, meaning that they support the same set of CPU features4. If the new nodes have different or additional CPU features than the existing nodes, vMotion may fail or be restricted4.
To facilitate adding the new hosts to the cluster, one possible solution is to enable Enhanced vMotion Compatibility (EVC) on the cluster5. EVC is a feature of VMware vSphere that ensures vMotion compatibility for the hosts in a cluster that are running different CPU generations5. EVC masks the CPU features that are not common among all hosts in the cluster, so that all hosts present the same CPU feature set to the virtual machines5. This way, vMotion can be performed without CPU compatibility errors5.
To enable EVC on the cluster, the following steps are required5:
- Power off all virtual machines in the cluster, or migrate them to another cluster.
- Edit the cluster settings and select the EVC mode that corresponds to the baseline CPU feature set for the cluster. The EVC mode must be equivalent to or a subset of the feature set of the host with the smallest feature set in the cluster.
- Add the new hosts to the cluster and verify that they are compatible with the EVC mode.
- Power on the virtual machines in the cluster, or migrate them back from another cluster.
By enabling EVC, the cluster can benefit from the improved vMotion compatibility and flexibility of adding new hosts with newer generation Intel CPUs. However, enabling EVC also has some limitations and trade-offs, such as5:
- EVC does not allow vMotion between hosts with different CPU vendors, such as AMD and Intel.
- EVC does not prevent vMotion from failing for other reasons, such as network or storage incompatibility.
- EVC may prevent virtual machines from accessing some CPU features that are available on newer hosts, but not on older hosts.
- EVC may not work with some applications that do not follow the CPU vendor recommended methods of feature detection.
Therefore, enabling EVC should be carefully planned and tested before adding the new hosts to the cluster.
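Conceptually, the EVC baseline described above is the set of CPU features common to every host in the cluster, which can be modeled as a set intersection. The sketch below is purely illustrative; the feature names are hypothetical examples, not a real vSphere API.

```python
# Illustrative model of how EVC masks CPU features: the cluster presents
# only the features common to every host (a set intersection).
# Feature names are hypothetical; this is not a vSphere API.

def evc_baseline(host_feature_sets):
    """Return the CPU feature set an EVC-enabled cluster would present:
    the intersection of every host's supported features."""
    baseline = set(host_feature_sets[0])
    for features in host_feature_sets[1:]:
        baseline &= set(features)   # drop features any host lacks
    return baseline

old_node = {"sse4.2", "avx"}
new_node = {"sse4.2", "avx", "avx2", "avx512"}  # newer-generation CPU
print(sorted(evc_baseline([old_node, new_node])))  # ['avx', 'sse4.2']
```

This also illustrates the trade-off noted above: the newer node's `avx2` and `avx512` features are masked from the virtual machines because the older node cannot provide them.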
References:
- 1: HPE SimpliVity
- 2: HPE SimpliVity User Guide
- 3: HPE SimpliVity Expansion Installation and Startup Service
- 4: VMware EVC and CPU Compatibility FAQ
- 5: About Enhanced vMotion Compatibility
Which values does the TCO Calculator provide? (Choose two.)
Options:
Return on investment
Cost to company (CTC)
Capital expenditure
Net present value
Operational expenditure
Answer:
A, D
Explanation:
The TCO Calculator is a tool that helps you compare the total cost of ownership (TCO) of different IT infrastructure solutions, such as on-premises, cloud, or hybrid. The TCO Calculator provides two values that can help you evaluate the financial benefits of each solution: return on investment (ROI) and net present value (NPV).
- Return on investment (ROI) is a measure of the profitability of an investment, calculated as the ratio of the net income generated by the investment to the initial cost of the investment. A higher ROI indicates a more profitable investment. The TCO Calculator estimates the ROI of each solution over a specified period of time, based on the expected savings, costs, and revenues.
- Net present value (NPV) is a measure of the present value of the future cash flows of an investment, discounted by a certain interest rate. A positive NPV indicates that the investment is worth more than its initial cost. The TCO Calculator estimates the NPV of each solution over a specified period of time, based on the expected savings, costs, revenues, and discount rate.
The TCO Calculator can help you compare the ROI and NPV of different IT infrastructure solutions and choose the one that best suits your business needs and goals. References: TCO & ROI Calculators for IT Infrastructure – Total Cost of Ownership | HPE, TCO Calculator User Guide, TCO Calculator FAQ
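The two values above can be computed with standard formulas. This is a minimal sketch with hypothetical figures, not the TCO Calculator's internal model:

```python
def roi(net_income, initial_cost):
    """Return on investment: ratio of net income generated
    by the investment to its initial cost."""
    return net_income / initial_cost

def npv(rate, cash_flows):
    """Net present value of cash flows discounted at `rate`.
    cash_flows[0] is the (negative) initial investment at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical: a $100k investment returning $40k/year for 3 years,
# discounted at 8%.
print(round(roi(120_000 - 100_000, 100_000), 2))                 # 0.2
print(round(npv(0.08, [-100_000, 40_000, 40_000, 40_000]), 2))   # 3083.88
```

The positive NPV means the discounted future savings exceed the initial cost, so the investment is worth more than it costs even after accounting for the time value of money.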
Match the tool with its description.
Options:
Answer:
Explanation:
- Scales up and down to meet workload requirements. = Both
- Can include infrastructure, colocation, power and cooling in a single bill. = HPE GreenLake
- Customer retains ownership of the assets. = Customer
- HPE retains ownership of the assets. = HPE GreenLake
- Customer can add HPE GreenLake Management Service. = HPE GreenLake
- Both: Both customer and HPE GreenLake solutions can scale up and down to meet workload requirements, depending on the type of solution and the contract terms. For example, a customer can purchase a scalable solution, such as HPE Synergy or HPE SimpliVity, and add or remove resources as needed. Alternatively, a customer can use HPE GreenLake, which offers a pay-per-use model, and adjust the capacity and performance on demand.
- HPE GreenLake: HPE GreenLake is a consumption-based model where the customer pays only for the resources they use, on a monthly basis. HPE owns the assets and provides them as a service to the customer, along with the necessary support, management, and optimization. HPE GreenLake can include infrastructure, colocation, power and cooling in a single bill, depending on the customer’s needs and preferences. HPE GreenLake also allows the customer to add HPE GreenLake Management Service, which is a cloud-native platform that provides a unified and consistent experience for managing and optimizing HPE’s edge-to-cloud solutions.
- Customer: Customer is a traditional acquisition model where the customer pays upfront for the hardware, software, and services they need. The customer owns the assets and is responsible for their maintenance, management, and upgrade. This model gives the customer more control, security, and customization over their assets, but it also requires more investment, maintenance, and expertise.
Your customer needs a single-node 60TB S3 target for some of their applications.
Which solution meets their requirements?
Options:
HPE MSA 2062
HPE Alletra 6030
Scality RING
Scality ARTESCA
Answer:
D
Explanation:
Scality ARTESCA is a lightweight, cloud-native object storage solution that can run on a single node and provide S3-compatible API for applications. It is designed to deliver high performance, scalability, and resilience for edge-to-cloud workloads. Scality ARTESCA can support up to 64TB of usable capacity per node, which meets the customer’s requirement of 60TB. Scality ARTESCA also offers features such as data protection, encryption, replication, erasure coding, and multi-tenancy. Scality ARTESCA is part of the HPE Edge-to-Cloud portfolio and can be deployed on HPE ProLiant servers or HPE Apollo systems. References: Scality ARTESCA, HPE Edge-to-Cloud Solutions, HPE and Scality ARTESCA: Cloud-native object storage for edge-to-cloud data
A customer is looking for a Storage solution that meets their requirement of a 3-2-1 data protection architecture. They are already using Veeam to protect their workloads.
Which HPE solution should you propose to address this requirement?
Options:
HPE GreenLake Central
HPE Data Services Cloud Console
HPE Alletra Peer Persistence
HPE Cloud Volumes Backup
Answer:
D
Explanation:
HPE Cloud Volumes Backup is a cloud-native backup storage target that enables customers to seamlessly back up data from their on-premises arrays to the cloud, without changing their existing data protection workflows. HPE Cloud Volumes Backup supports various backup applications, including Veeam, and integrates with them through a secure client that runs on the customer’s premises. HPE Cloud Volumes Backup can help customers meet the 3-2-1 data protection architecture, which recommends having at least three copies of data, on two different media, with one copy off-site. By using HPE Cloud Volumes Backup, customers can have an off-site copy of their data in the cloud, which can be easily restored to their on-premises storage or to HPE Cloud Volumes Block, a cloud-native block storage service. HPE Cloud Volumes Backup offers flexible and predictable pay-per-use pricing, as well as security, scalability, and simplicity features. References:
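The 3-2-1 rule described above can be expressed as a simple check. The following is a minimal illustrative sketch; the `BackupCopy` structure and the function are hypothetical and not part of any HPE or Veeam API:

```python
# Minimal sketch of the 3-2-1 data protection rule:
# at least 3 copies of the data, on at least 2 different media types,
# with at least 1 copy stored off-site. Illustrative only.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str       # e.g. "primary-array", "backup-appliance", "cloud-object"
    offsite: bool    # True if the copy lives outside the primary site

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Return True if the set of copies meets the 3-2-1 rule."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# Example: production data on the array, a local Veeam backup copy,
# and an off-site copy on a cloud backup target such as HPE Cloud Volumes Backup.
plan = [
    BackupCopy("primary-array", offsite=False),
    BackupCopy("backup-appliance", offsite=False),
    BackupCopy("cloud-object", offsite=True),
]
print(satisfies_3_2_1(plan))  # True
```

In this framing, HPE Cloud Volumes Backup supplies the off-site copy that completes the rule for a customer already keeping two copies on-premises.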
Match each solution to the appropriate customer.
Options:
Answer:
Explanation:
- Customers looking to unlock agility and collapse data management silos. = Alletra
- Customers looking to run apps and collapse infrastructure management silos. = SimpliVity
- Customers looking to run apps and build a resilient enterprise. = Zerto
- Alletra: Alletra is a data services platform that offers cloud-native data management and storage solutions for various workloads and applications. It can help customers unlock agility and collapse data management silos by enabling them to provision, monitor, and optimize their data services from a single cloud console, and to integrate with various cloud providers and services.
- SimpliVity: SimpliVity is a hyperconverged infrastructure solution that combines compute, storage, networking, and data protection in a single appliance. It can help customers run apps and collapse infrastructure management silos by simplifying and streamlining their IT operations, reducing costs and complexity, and improving efficiency and performance.
- Zerto: Zerto is a data protection and disaster recovery solution that offers continuous data replication, backup, and recovery for various workloads and applications. It can help customers run apps and build a resilient enterprise by enhancing their data availability, security, and compliance, and by enabling them to recover from any disruption or failure in minutes.
What information should you gather during a physical site survey? (Choose two.)
Options:
existing VLANs
datacenter dimensions
current patch levels
existing disk capacity
fire suppression capabilities
Answer:
B, E
Explanation:
A physical site survey is a detailed inspection and assessment of a plot of land or a facility where work is proposed, to gather information for a design or an estimate to complete the initial tasks required for an outdoor or indoor activity12. It can determine a precise location, access, best orientation for the site and the location of obstacles3. A physical site survey is important for any architectural or engineering project, as it provides critical information that affects everything from project planning to execution2.
One of the areas where a physical site survey is required is the datacenter, where HPE Edge-to-Cloud Solutions can be deployed to provide a hybrid cloud operating model across all workloads and data4. A datacenter is a facility that houses computing and networking equipment, as well as power, cooling, security, and fire suppression systems5. A physical site survey of a datacenter should gather information such as:
- Datacenter dimensions: The size and shape of the datacenter, including the floor area, ceiling height, and available space for racks, cabinets, and equipment. This information is essential for determining the optimal layout and design of the datacenter, as well as the capacity and scalability of the infrastructure5.
- Fire suppression capabilities: The type and availability of fire suppression systems in the datacenter, such as sprinklers, gas, or foam. This information is important for ensuring the safety and reliability of the datacenter, as well as complying with the fire codes and regulations5.
Other information that may be gathered during a physical site survey of a datacenter include:
- Power and cooling requirements: The amount and quality of power and cooling available in the datacenter, as well as the backup and redundancy options. This information is crucial for ensuring the performance and availability of the datacenter, as well as minimizing the energy consumption and costs5.
- Network connectivity and security: The type and speed of network connections and services available in the datacenter, as well as the security measures and policies in place. This information is vital for ensuring the connectivity and security of the datacenter, as well as supporting the edge-to-cloud solutions and services5.
The other options, such as existing VLANs, current patch levels, and existing disk capacity, are not information that should be gathered during a physical site survey, as they are more relevant for a logical or technical site survey. A logical or technical site survey is a different type of site survey that focuses on the existing or planned configuration and functionality of the IT systems and devices in a site, such as servers, storage, switches, routers, firewalls, and software. A logical or technical site survey is usually performed after a physical site survey, and may require additional tools and methods, such as network analyzers, ping tests, traceroute, and SNMP.
References:
- 4: Edge to Cloud | HPE GreenLake | HPE - Hewlett Packard Enterprise
- 1: Physical Site Survey - Aruba
- 2: What is an architecture Site Survey? Understanding their importance… - archisoup | Architecture Guides & Resources
- 3: Site survey - Wikipedia
- 5: What is a Site Survey and Why is it Important - Emlii
- [Logical Site Survey - Aruba]
Your customer has asked for a new storage platform for their mission critical Microsoft SQL environment. The environment has outgrown its current platform and is expected to see additional growth as new applications are brought online. You recommend HPE Alletra 9000.
What HPE Alletra 9000 business values should you discuss as part of your recommendation? (Choose two.)
Options:
distributed parallel access
100% uptime guarantee
Enterprise scalability and performance
mainframe and open systems support
built-in continuous data protection
Answer:
B, C
Explanation:
HPE Alletra 9000 is a cloud-native data infrastructure that delivers a cloud experience for mission-critical workloads. It is designed to provide extreme low-latency, reliability, and performance density in a 4U enclosure. Some of the business values of HPE Alletra 9000 are:
- 100% uptime guarantee: HPE Alletra 9000 ensures resiliency with a 100% data availability guarantee, backed by the HPE Alletra availability guarantee program. It also offers built-in data protection, replication, and disaster recovery capabilities to safeguard data from any eventuality.
- Enterprise scalability and performance: HPE Alletra 9000 delivers consistent sub-millisecond latency and up to 15M IOPS for the most demanding applications. It also supports massive scalability, with up to 96 NVMe drives and 6PB of raw capacity per system, and up to 4 systems in a single namespace. HPE Alletra 9000 can also easily extend to the cloud with consistent data services and native integration with cloud platforms.
References: HPE Alletra 9000 | HPE Store Australia, Document Display | HPE Support Center, Transform Your Business with the HPE Alletra 9000 Series
Your customer needs a storage solution for their geographical information system (GIS) video storage. The software they use requires an SMB file share and writes video files of various sizes based on the bitrate of the cameras. They keep all video for a minimum of 10 years. They expect the size of their archive to grow over time as cameras change.
Which solution meets the customer’s requirements?
Options:
HPE Solutions for Qumulo
HPE Solutions for Cohesity
HPE Solutions for Weka
HPE Solutions for Scality ARTESCA
Answer:
A
Explanation:
HPE Solutions for Qumulo is a file data platform that provides scalable, secure, and cost-effective storage for unstructured data, such as GIS video files. HPE Solutions for Qumulo supports SMB file shares and can handle various file sizes and types with high performance and efficiency. HPE Solutions for Qumulo also offers data protection, encryption, and replication features to ensure long-term data retention and durability. HPE Solutions for Qumulo can easily grow with the customer’s data needs, as it supports up to 64 nodes per cluster and up to 1.6PB of usable capacity per 4U node. References: HPE Solutions for Qumulo | HPE Store US, HPE Solutions for Qumulo - Data Sheet
You need to include non-GreenLake-enabled ISVs in a customer solution.
With whom should you engage if you need help with this solution?
Options:
HPE Pointnext advisory services
HPE ProLiant product management
HPE Pointnext operational services
HPE Complete product management
Answer:
D
Explanation:
HPE Complete is a program that provides a one-stop shop for validated HPE and third-party partner end-to-end infrastructure solutions. HPE Complete engineering, together with the third-party partners, validates the interoperability and reliability of HPE Complete third-party products with HPE storage, server, and networking solutions. HPE Complete product management is responsible for managing the portfolio of third-party products and solutions that are part of the HPE Complete program. If you need to include non-GreenLake-enabled ISVs in a customer solution, you should engage with HPE Complete product management to find the best fit for your customer’s needs and goals. References: HPE Complete 3rd Party Technology Partner Products & Solutions, HPE Complete Care Service
Your client needs a departmental storage array to host a VDI workload at a remote office. The networking infrastructure is limited, and the client has decided to connect the ESXi host servers with 12Gbps SAS.
Which HPE Storage product will meet their requirements?
Options:
HPE Alletra 9060
HPE Nimble AF20
HPE MSA 2062
HPE Alletra 6030
Answer:
C
Explanation:
The HPE MSA 2062 is a cost-effective, entry-level storage array that supports 12Gbps SAS connectivity to the host servers. It is designed for small and medium-sized businesses, remote offices, and departmental workloads such as VDI, and can deliver up to 325,000 IOPS of performance. It also offers advanced data services such as snapshots, replication, encryption, and tiering. The HPE MSA 2062 is compatible with VMware vSphere and can be easily managed through its built-in web-based management interface. References:
- [HPE MSA 2062 Storage]
- [HPE MSA Storage Configuration and Best Practices for VMware vSphere]
- [HPE MSA Storage Solutions]
Your customer needs the smallest physical footprint for their S3 bucket storage requirement.
Which HPE alliance platform should you recommend?
Options:
HPE Solutions for Scality RING
HPE Solutions for Qumulo
HPE Solutions for Cohesity
HPE Solutions for Scality ARTESCA
Answer:
D
Explanation:
HPE Solutions for Scality ARTESCA is a lightweight, cloud-native object storage platform that supports S3 bucket storage for modern applications. It is designed to run on Kubernetes and bare metal servers, and can be deployed on a single node or scaled out to petabytes. It offers high performance, durability, and federated data management across edge, core, and cloud environments. It also has a low total cost of ownership and a flexible consumption model with HPE GreenLake. Compared to the other options, HPE Solutions for Scality ARTESCA has the smallest physical footprint and the most portability and scalability for S3 bucket storage requirements. References:
- HPE Object Based Storage for Scality Solutions
- HPE and Scality ARTESCA deliver best edge-to-cloud-to-core data management experience for AI/ML
- HPE Solutions with Scality
- Scality and Hewlett Packard Enterprise unveil ARTESCA: lightweight, true enterprise-grade, cloud-native object storage software for Kubernetes
- Scality and HPE Launch Object Storage Software for Kubernetes