
JFrog and Qwak create secure MLOps workflows

28th February 2024
Harry Fowle

JFrog has announced a new technology integration with Qwak that brings machine learning models alongside traditional software development processes to streamline, accelerate, and scale the secure delivery of ML applications.

“Currently, data scientists and ML engineers are using a myriad of disparate tools, which are mostly disconnected from standard DevOps processes within the organisation, to mature models to release. This slows MLOps processes down, compromises security, and increases the cost of building AI-powered applications,” said Gal Marder, Executive Vice President of Strategy, JFrog. “The combination of the JFrog Platform – with Artifactory and Xray at its core – plus Qwak provides users with a complete MLSecOps solution that brings ML models in line with other software development processes, creating a single source of truth for all software components across Engineering, MLOps, DevOps, and DevSecOps teams so they can build and release AI applications faster, with minimal risk and less cost.”

Uniting JFrog Artifactory and Xray with Qwak’s ML Platform brings ML apps alongside all other software development components in a modern DevSecOps and MLOps workflow, enabling data scientists, ML engineers, developers, security, and DevOps teams to build ML apps quickly, securely, and in compliance with regulatory guidelines. The native Artifactory integration connects JFrog’s universal ML model registry with a centralised MLOps platform so users can easily build, train, and deploy models with greater visibility, governance, versioning, and security. Using a centralised platform for ML model deployment also allows users to focus less on infrastructure and more on their core data science tasks.

IDC research indicates that while AI/ML adoption is on the rise, the cost of implementing and training models, shortage of trained talent, and absence of solidified software development life-cycle processes for AI/ML are among the top three inhibitors to realising the full benefits of AI/ML at scale.

"Building ML pipelines can be complicated, time-consuming, and costly for organisations looking to scale their MLOps capabilities. These homegrown solutions are not equipped to manage and protect the process of building, training, and tuning ML models at scale, with little to no auditability," said Jim Mercer, Program Vice President, Software Development, DevOps, and DevSecOps. "Having a single system of record that can help automate the development, provide a documented chain of provenance, and secure ML models alongside all other software components offers a compelling alternative for optimising the ML process while injecting more model security and compliance.”

Without the right infrastructure, platform, and processes for ML operations (MLOps), it is challenging to build, manage, and scale complex ML infrastructure, deploy models quickly, and secure them without incurring excessive costs. Companies often struggle to manage infrastructure complexity, which leads to expensive and time-consuming authentication and security protocols between the various development environments.

“AI and ML have recently transformed from being a distant future prospect to a ubiquitous reality. Building ML models is a complex and time-intensive process, which is why many data scientists are still struggling to turn their ideas into production-ready models,” said Alon Lev, CEO, Qwak. “While there are plenty of open-source tools on the market, putting all of those together to build a comprehensive ML pipeline isn’t easy, which is why we’re thrilled to work with JFrog on a solution for automating ML artifacts and releases in the same, secure way customers manage their software supply chain with JFrog Artifactory and Xray.”

The need for secure, end-to-end MLOps processes was further underscored by the JFrog Security Research team’s discovery of malicious ML models on Hugging Face, a widely used AI model repository. Their research found that several malicious ML models hosted on Hugging Face could execute attacker-supplied code when loaded, which could lead to data breaches, system compromise, or other malicious actions.
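The article does not detail the specific payloads found, but a common vector for this class of attack is Python’s pickle serialisation, which several popular model formats build on: deserialising a pickled object can invoke arbitrary callables. A minimal, deliberately harmless sketch (the class name and payload are illustrative, not from the JFrog research):

```python
import pickle

class MaliciousModel:
    """Looks like an ordinary saved model, but hijacks deserialisation."""
    def __reduce__(self):
        # pickle calls this callable with these args at load time;
        # a real attack would invoke os.system or similar instead of eval
        return (eval, ("'p' + 'wned'",))

payload = pickle.dumps(MaliciousModel())  # what an attacker would upload
result = pickle.loads(payload)            # victim merely "loads the model"
print(result)  # attacker-controlled code has already run: prints "pwned"
```

Because loading alone triggers execution, scanning model artifacts before they enter the pipeline (the role tools like Xray play in this integration) or preferring code-free formats matters as much for models as for any other software component.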

© Copyright 2024 Electronic Specifier