
HPE and NVIDIA Accelerate Enterprise AI with Expanded Portfolio and Enhanced Capabilities

by Harold Fritts

HPE Private Cloud AI, powered by NVIDIA, streamlines enterprise AI deployment with enhanced security, efficiency & scalability.

Hewlett Packard Enterprise (HPE) and NVIDIA have introduced an expanded range of enterprise AI solutions designed to streamline the deployment of generative, agentic, and physical AI workloads. Under the NVIDIA AI Computing by HPE banner, the new offerings improve performance, power efficiency, and security, and together form a comprehensive, turnkey private cloud tailored to enterprises of all sizes, allowing organizations to train, fine-tune, and deploy advanced AI models efficiently.

HPE Private Cloud AI

Antonio Neri, President and CEO at HPE, highlighted the increasing importance of integrated AI solutions, emphasizing that collaboration with NVIDIA accelerates enterprises’ ability to harness AI to improve productivity and create new revenue streams. Jensen Huang, NVIDIA’s founder and CEO, reinforced this, noting the transformative impact of AI across industries. Huang described the NVIDIA-HPE collaboration as essential for enterprises building scalable AI factories, unlocking productivity and innovation from generative and agentic AI to robotics and digital twins.

Turnkey AI Deployment with HPE Private Cloud AI and NVIDIA AI Data Platform

The HPE Private Cloud AI platform now supports the NVIDIA AI Data Platform, providing enterprises with the fastest route to extracting actionable insights from their data. Built on the self-service capabilities of HPE GreenLake cloud, the integration enables continuous data processing across NVIDIA accelerated computing, networking, AI software, and enterprise storage.

Joint development efforts between HPE and NVIDIA ensure rapid deployment of industry-leading blueprints and models, including NVIDIA AI-Q Blueprints and NVIDIA NIM microservices optimized for the powerful NVIDIA Llama Nemotron models, enhancing reasoning capabilities in AI systems.
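To make the NIM piece concrete, the sketch below shows how a deployed NIM microservice is typically queried through its OpenAI-compatible REST interface. The endpoint host and the Nemotron model identifier are placeholders chosen for illustration; actual values depend on how the microservice is deployed inside HPE Private Cloud AI.

```python
# Minimal sketch: calling a NIM microservice via its OpenAI-compatible
# chat-completions endpoint. Host and model name below are assumptions.
import requests

NIM_ENDPOINT = "http://nim.example.internal:8000/v1/chat/completions"  # hypothetical host
MODEL_NAME = "nvidia/llama-3.1-nemotron-70b-instruct"  # example Nemotron model ID

payload = {
    "model": MODEL_NAME,
    "messages": [
        {"role": "user", "content": "Summarize last quarter's support tickets by root cause."}
    ],
    "max_tokens": 512,
    "temperature": 0.2,
}

response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```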

Additional HPE Private Cloud AI enhancements include a dedicated AI developer system that delivers an instant AI development environment with integrated control nodes, NVIDIA accelerated computing, AI software stacks, and 32 TB of integrated storage. In addition, HPE Data Fabric Software introduces a unified data layer that ensures consistent data quality across hybrid environments for structured, unstructured, and streaming data.

The inclusion of pre-validated NVIDIA Blueprints, such as Multimodal PDF Data Extraction and Digital Twins, significantly accelerates time to value, enabling enterprises to deploy sophisticated AI-driven workloads quickly.

AI-native Observability with HPE OpsRamp

HPE OpsRamp introduces full-stack GPU optimization for AI-native software stacks. The new capability gives enterprises comprehensive observability to monitor and optimize the performance of large-scale NVIDIA accelerated computing clusters. Available both within HPE Private Cloud AI and as a standalone offering, it simplifies ongoing management and operational support.
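As a rough illustration of the kind of per-GPU telemetry such an observability layer aggregates at cluster scale, the sketch below collects utilization, memory, power, and temperature directly through NVIDIA's NVML Python bindings. This is not OpsRamp's API; it only shows the underlying signals being monitored.

```python
# Illustrative sketch: per-GPU telemetry via NVML (pip install nvidia-ml-py).
# An observability platform would collect metrics like these across all nodes.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)    # % GPU / memory activity
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)            # bytes used / total
        power = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000   # milliwatts -> watts
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"GPU {i} ({name}): util={util.gpu}% "
              f"mem={mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB "
              f"power={power:.0f} W temp={temp} C")
finally:
    pynvml.nvmlShutdown()
```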

Additionally, HPE offers new Day 2 operational services combining HPE Complete Care Service with NVIDIA GPU optimization, allowing IT teams to proactively manage, troubleshoot, and optimize complex AI workloads across hybrid environments.

Agentic AI Use Cases and Professional Services

To address the growing market demand for agentic AI, HPE announced several strategic expansions. HPE and Deloitte are bringing Deloitte’s Zora AI for Finance to market via HPE Private Cloud AI, transforming traditional executive financial reporting into dynamic, interactive experiences. Initial deployments will support financial statement analysis, scenario modeling, and competitive market assessments, with HPE adopting the solution first.

Furthermore, CrewAI joined HPE’s Unleash AI program, integrating multi-agent automation with HPE Private Cloud AI to rapidly develop and deploy tailored AI agents, driving organizational efficiency and more intelligent decision-making.

To further ease enterprise AI adoption, HPE now provides targeted professional services to identify, build, and deploy agentic AI solutions using NVIDIA NIM microservices and NeMo technology, accelerating time to business value.

New HPE Servers Powered by NVIDIA Blackwell Architecture

NVIDIA AI Computing by HPE unveiled next-generation servers leveraging NVIDIA's Blackwell Ultra and Blackwell architectures, optimized for AI training, fine-tuning, and inference workloads. Among these new systems, the NVIDIA GB300 NVL72 by HPE targets service providers and enterprises deploying massive AI clusters. It features optimized compute, expansive memory, and advanced networking, enhanced by HPE's industry-leading liquid cooling expertise.

HPE ProLiant Compute DL380a Gen12

The new HPE ProLiant Compute XD servers with NVIDIA HGX B300 platforms address highly demanding workloads such as agentic AI and reasoning inference. HPE’s ProLiant Compute DL384b Gen12, featuring NVIDIA GB200 Grace Blackwell NVL4 Superchips, revolutionizes performance for converged HPC and AI workloads, including graph neural network training and scientific computing. The HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX PRO 6000 Blackwell Server Edition targets enterprise inferencing and visual computing workloads.

Enhanced Full Lifecycle Security with HPE ProLiant Gen12

The new ProLiant Compute Gen12 series servers deliver advanced security at every lifecycle stage, anchored by HPE’s silicon root of trust technology. Enhanced by a dedicated secure enclave security processor, these servers establish a robust chain of trust, safeguarding against firmware-level threats from manufacturing through delivery. HPE’s iLO 7, the industry’s first server management solution incorporating post-quantum cryptography, meets stringent FIPS 140-3 Level 3 security certification, providing robust protection against emerging threats.

Modular, Energy-efficient Data Centers Tailored for AI

HPE continues its five-decade legacy in advanced liquid cooling technology, proven in eight of the world’s top 15 most energy-efficient supercomputers (Green500 ranking). To address escalating energy demands from AI workloads, HPE’s new modular AI Mod POD offers a high-density, performance-optimized data center solution supporting up to 1.5MW per module. Utilizing patented Adaptive Cascade Cooling technology, the modular infrastructure supports both traditional air cooling and 100% liquid-cooled environments, significantly reducing energy consumption and accelerating time to deployment.
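A back-of-the-envelope calculation shows how a 1.5 MW module budget translates into rack capacity. The per-rack draw and cooling overhead below are assumptions for illustration only, not HPE or NVIDIA specifications; only the 1.5 MW figure comes from the announcement.

```python
# Hypothetical sizing sketch for one AI Mod POD module.
MODULE_POWER_KW = 1500   # 1.5 MW per module (from the announcement)
RACK_POWER_KW = 120      # assumed draw of one dense liquid-cooled AI rack
PUE = 1.1                # assumed power usage effectiveness with liquid cooling

it_power_kw = MODULE_POWER_KW / PUE        # power remaining for IT after cooling overhead
racks = int(it_power_kw // RACK_POWER_KW)  # whole racks that fit in the budget

print(f"IT power available: {it_power_kw:.0f} kW -> roughly {racks} racks per module")
```

Under these assumptions a single module hosts roughly 11 dense racks; a lower PUE or lighter racks shifts that count accordingly.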


Availability

  • HPE Private Cloud AI developer system: Available Q2 2025.
  • HPE Data Fabric within Private Cloud AI: Available Q3 2025.
  • NVIDIA GB300 NVL72 and HPE ProLiant Compute XD with NVIDIA HGX B300: Available in the second half of 2025.
  • HPE ProLiant Compute DL384b Gen12 with NVIDIA GB200 NVL4: Available Q4 2025.
  • HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX PRO 6000 Blackwell Server Edition: Available Q3 2025.
  • Pre-validated NVIDIA Blueprints: Available Q2 2025.
  • HPE OpsRamp GPU optimization and AI Mod POD: Immediately available.
