Red Hat has rolled out an update to its hybrid cloud AI and machine learning platform, Red Hat OpenShift AI, introducing a series of new features aimed at enhancing AI model management and deployment. The platform’s version 2.15, set to be generally available by mid-November, brings improvements including a model registry with versioning and tracking features, data drift detection tools, and bias detection functionalities. The update also strengthens security measures for users running AI models in production environments.
Among the standout additions is the model registry, which is currently available in a technology preview. This feature offers a structured approach to share, version, deploy, and track models, metadata, and related artifacts. The registry helps data scientists and AI engineers maintain control over the lifecycle of machine learning models, ensuring smooth updates and deployments. With this registry, teams can efficiently manage multiple versions of their models, making the deployment process more organized and transparent.
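To make the versioning-and-tracking idea concrete, here is a minimal, purely illustrative sketch of the kind of record a registry keeps: named models, immutable versions, artifact locations, and per-version metadata. The class and field names below are hypothetical and do not reflect the OpenShift AI model registry's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelVersion:
    """One immutable, versioned entry for a registered model."""
    version: str
    artifact_uri: str                  # e.g. an object-storage path to the serialized model
    metadata: dict = field(default_factory=dict)
    registered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ModelRegistry:
    """Hypothetical in-memory registry: model name -> list of versions."""
    models: dict = field(default_factory=dict)

    def register(self, name: str, version: str, artifact_uri: str, **metadata) -> ModelVersion:
        entry = ModelVersion(version, artifact_uri, metadata)
        self.models.setdefault(name, []).append(entry)
        return entry

    def latest(self, name: str) -> ModelVersion:
        return self.models[name][-1]


registry = ModelRegistry()
registry.register("fraud-detector", "1.0.0", "s3://models/fraud/1.0.0",
                  framework="sklearn", accuracy=0.91)
registry.register("fraud-detector", "1.1.0", "s3://models/fraud/1.1.0",
                  framework="sklearn", accuracy=0.94)
print(registry.latest("fraud-detector").version)   # -> 1.1.0
```

Even in this toy form, the value is visible: every deployed version has a traceable artifact and metadata trail, so a team can tell exactly which model is serving traffic and roll back cleanly if needed.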
Another significant enhancement in Red Hat OpenShift AI is data drift detection. This tooling monitors the distribution of input data fed to deployed machine learning models. By identifying when live data starts to diverge significantly from the data a model was originally trained on, drift detection lets teams assess whether their models are still performing accurately or need retraining. This functionality is crucial for maintaining model reliability in dynamic real-world environments.
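As a rough illustration of the underlying idea (a generic statistical sketch, not the OpenShift AI tooling itself), a two-sample Kolmogorov-Smirnov test can flag when a live feature's distribution has shifted away from the training distribution:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical values for one feature: what the model was trained on vs. what it sees live.
rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)  # shifted and wider

# Two-sample KS test: a small p-value suggests the live distribution
# no longer matches the training distribution, i.e. possible data drift.
result = ks_2samp(training_feature, live_feature)
if result.pvalue < 0.01:
    print(f"Drift suspected (KS statistic={result.statistic:.3f}, p={result.pvalue:.2e})")
else:
    print("No significant drift detected for this feature")
```

In practice such a check would run per feature on a schedule, with alerts routed to the team that owns the model so retraining can be triggered before accuracy degrades.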
The update also introduces bias detection tools, sourced from the TrustyAI open-source community, which are designed to monitor models for fairness and potential bias during their deployment. These tools help data scientists ensure that AI models are equitable and non-discriminatory, addressing the growing concern of AI fairness in real-world applications. Furthermore, Red Hat has incorporated LoRA (low-rank adaptation) fine-tuning capabilities, allowing for more efficient fine-tuning of large language models like Llama 3. This feature helps organizations scale their AI workloads while minimizing resource consumption and costs.
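To make the fairness-monitoring idea concrete, one common check is the statistical parity difference: the gap in favorable-outcome rates between two groups defined by a protected attribute. The snippet below is a generic illustration with made-up data, not the TrustyAI API:

```python
import numpy as np

# Hypothetical model outputs: 1 = favorable prediction, 0 = unfavorable.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
# Hypothetical protected attribute for each prediction (group "A" vs. group "B").
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

rate_a = predictions[group == "A"].mean()  # favorable-outcome rate for group A
rate_b = predictions[group == "B"].mean()  # favorable-outcome rate for group B

# Statistical parity difference: values near 0 suggest similar treatment;
# large absolute values are a signal to investigate the model for bias.
spd = rate_a - rate_b
print(f"Group A rate={rate_a:.2f}, Group B rate={rate_b:.2f}, SPD={spd:+.2f}")
```

Monitoring a metric like this on live predictions, rather than only at training time, is what allows bias that emerges after deployment to be caught.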
Additional support for Nvidia NIM microservices and AMD GPUs further strengthens Red Hat OpenShift AI, offering enhanced performance for generative AI applications. The platform now also supports the AMD ROCm workbench image, providing users with a streamlined environment for using AMD GPUs in model development. These new integrations ensure that Red Hat OpenShift AI remains a powerful and versatile platform for businesses looking to harness the full potential of AI and machine learning technologies.
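As a simple sanity check inside any AMD GPU notebook environment (not specific to the OpenShift AI workbench image), ROCm builds of PyTorch surface AMD GPUs through the familiar torch.cuda API, so the same availability check works for both NVIDIA and AMD accelerators:

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs are exposed via the torch.cuda namespace,
# so this check confirms the accelerator is visible regardless of vendor.
if torch.cuda.is_available():
    print(f"Accelerator detected: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU visible to PyTorch in this environment")
```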