Companies are betting big on machine learning (ML). According to IDC, 85% of the world’s largest organizations will use artificial intelligence (AI) – including machine learning, natural language processing (NLP) and pattern recognition – by 2026.
And a survey conducted by ESG found that “62% of organizations expect to increase their AI spending year-over-year, including investments in people, process and technology.”
But despite all the money invested in ML projects, most organizations struggle to get their ML models and applications working on production systems.
Gartner Market Research claims that “only half of AI projects move from pilot to production, and those that do take an average of nine months to do so.”
IDC’s numbers look even worse, with only 31% of companies surveyed saying AI works in production. Further, “Of the 31% of AI in production, only one-third report having reached a mature adoption stage in which the entire organization benefits from an enterprise-wide AI strategy.”
And another recent survey has the worst numbers of all, finding that 90% of ML models are not deployed in production.
So what is the problem? Why are so many companies struggling to achieve their ML goals?
The problem with ML
Industry watchers suggest that companies’ struggles with ML come down to two key factors: process and infrastructure.
On the process side, most ML projects require the integration of multiple teams and systems. A report from Omdia notes: “Successful enterprise ML at scale requires the careful orchestration of a complex tapestry of people, processes and platforms, an effort that does not stop when a ML solution goes live, but continues for the life of the solution.”
Many companies do not yet have repeatable processes in place to meet these needs. As a result, data scientists often spend too much time on computational operations tasks, such as determining how to allocate computational resources, rather than building and training data science models.
These issues are exacerbated by a lack of hardware designed for ML use cases. According to Gartner, “86% of organizations have identified at least one of the following areas as a weak link in their AI infrastructure stack: GPU processing, CPU processing, data storage, networking, resource sharing, or integrated development environments.”
IDC agrees. “IDC research consistently shows that insufficient or lack of purpose-built infrastructure capabilities are often the cause of failed AI projects,” said Peter Rutten, vice president of research at IDC and head of its global research on high-performance computing solutions.
The promise of MLOps
So how can companies overcome these challenges? A partial solution lies in the adoption of MLOps.
In its simplest form, MLOps is defined as the application of the principles of the DevOps movement to machine learning. Cnvrg.io, which has built ready-to-use open-source ML pipelines that can run on any infrastructure, explains that MLOps “reduces friction and bottlenecks between ML development teams and engineering teams in order to operationalize models.” The company adds, “It’s a discipline that seeks to systematize the entire lifecycle of ML.”
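The idea of systematizing the ML lifecycle can be made concrete with a minimal sketch: each stage (preprocessing, training, evaluation, deployment) becomes an explicit, repeatable step, and a model is only promoted when it clears a quality gate. All function names, data, and the accuracy threshold below are hypothetical illustrations, not cnvrg.io’s actual API.

```python
# Minimal, hypothetical sketch of an MLOps-style pipeline: explicit stages
# plus an automated quality gate before deployment. Not a real framework.

def preprocess(raw):
    """Clean the raw data (here: drop records with a missing label)."""
    return [r for r in raw if r.get("label") is not None]

def train(data):
    """Stand-in for real training: a trivial majority-class 'model'."""
    labels = [r["label"] for r in data]
    majority = max(set(labels), key=labels.count)
    return {"predict": lambda record: majority}

def evaluate(model, data):
    """Measure accuracy of the model on the given data."""
    correct = sum(model["predict"](r) == r["label"] for r in data)
    return correct / len(data)

def deploy_if_good(accuracy, threshold=0.6):
    """Quality gate: only promote models that meet the accuracy bar."""
    return "deployed" if accuracy >= threshold else "rejected"

# The whole lifecycle runs as one repeatable workflow.
raw = [{"x": 1, "label": 1}, {"x": 2, "label": 1},
       {"x": 3, "label": 0}, {"x": 4, "label": None}]
data = preprocess(raw)
model = train(data)
acc = evaluate(model, data)
status = deploy_if_good(acc)
```

In a production MLOps platform, each of these steps would run in its own container with tracked inputs, outputs, and metrics, which is what makes the lifecycle auditable and repeatable.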
The approach works. Organizations that have implemented MLOps report up to a 10x increase in productivity, 5x faster model training, and up to a 50% increase in compute utilization, according to cnvrg.io research.
It should come as no surprise, then, that IDC predicts: “By 2024, 60% of enterprises will have operationalized their ML workflows with MLOps/ModelOps capabilities and infused AI into their IT infrastructure operations with AIOps capabilities”.
Infrastructure designed for MLOps
But MLOps is only part of the answer. Companies also need infrastructure designed to meet the needs of ML and, more specifically, the needs of MLOps. With that in mind, Dell Technologies recently rolled out its Dell Validated Design for AI, built in collaboration with cnvrg.io.
It meets the need for fast computing with VxRail HCI V670 or PowerEdge R750a servers, and the Dell design complements those processors with industry-leading NVIDIA A100 or A30 GPUs. 25GbE PowerSwitch S5248F-ON or NVIDIA® Spectrum® SN3700 switches, along with an out-of-band PowerSwitch S4148T-ON, provide the speed and bandwidth needed for MLOps, while PowerScale F600 or H600 provides highly scalable storage. It all ties into cnvrg.io’s MLOps stack, VMware Tanzu, and NVIDIA AI Enterprise software.
Dell infrastructure is also part of Intel’s cnvrg.io Metacloud, giving AI developers the flexibility to run, test and deploy AI and ML workloads on mixed hardware within the same workflow or pipeline. Metacloud leverages cloud-native technologies such as containers and Kubernetes, making it quick and easy for developers to select infrastructure that is on-premises, co-located, or in any public cloud, and run the workload there.
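The value of mixed hardware within one workflow can be sketched as a simple scheduling problem: each pipeline step declares what it needs, and a scheduler routes it to a matching cluster, whether on-premises or in a public cloud. The cluster names, capabilities, and scheduling logic below are purely illustrative assumptions, not Metacloud’s actual interface.

```python
# Hypothetical sketch of routing pipeline steps to mixed infrastructure
# (an on-prem GPU cluster and a public-cloud CPU pool) in one workflow.

CLUSTERS = {
    "onprem-gpu": {"gpu": True,  "location": "on-premises"},
    "cloud-cpu":  {"gpu": False, "location": "public cloud"},
}

def schedule(step):
    """Place a step on a cluster whose capability matches its request.

    GPU steps must land on a GPU cluster; CPU-only steps prefer a
    CPU cluster but can fall back to any available cluster.
    """
    for name, caps in CLUSTERS.items():
        if caps["gpu"] == step["needs_gpu"]:
            return name
    if not step["needs_gpu"]:
        return next(iter(CLUSTERS))  # CPU work can run anywhere
    raise RuntimeError("no cluster satisfies the GPU request")

pipeline = [
    {"name": "preprocess", "needs_gpu": False},
    {"name": "train",      "needs_gpu": True},
    {"name": "deploy",     "needs_gpu": False},
]

# One workflow, three steps, two different infrastructure targets.
placements = {step["name"]: schedule(step) for step in pipeline}
```

In practice, containers make this routing transparent to the developer: the same container image runs unchanged wherever the scheduler places it, which is what lets a single pipeline span on-premises and cloud hardware.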
With the right processes and infrastructure, companies can overcome the challenges inherent in large-scale ML and begin to achieve the goals of their machine learning projects.
Intel® Technologies Advance Analytics
Data analytics is the key to unlocking the maximum value you can extract from your organization’s data. To create a productive, cost-effective analytics strategy that gets results, you need high-performance hardware that’s optimized to work with the software you’re using.
Modern data analytics covers a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). New to analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a broad ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers across industries. Learn more about Intel Advanced Analytics.