
TAP Partner Feature: Chatterbox Labs & VMware on Addressing Multicloud Responsible and Private AI Together

As Artificial Intelligence (AI) scales across Government & Enterprise, it is critical that appropriate guardrails are put in place to ensure it operates in a manner that is ethical, trustworthy, responsible, safe and secure. 

Whilst the media focuses attention on headline AI use cases such as self-driving cars, public and private enterprises seek to leverage AI for process and workflow automation, data insights, workforce productivity, improved customer/citizen experience, and content augmentation. Yet, because these uses can dramatically impact people's lives and essential business functions, the unintended consequences can be severe.

Regulation is falling into place to address these risks, with the EU's AI Act, the USA's Algorithmic Accountability Act & AI Bill of Rights, and Canada's AIDA & CPPA all in active development (with the EU AI Act set to go live imminently). This is not to mention a plethora of sector- and jurisdiction-specific laws, many of which are already in place.

Many organizations start with policies and frameworks for Responsible AI. However, these are just one part of the puzzle. These organizations must also quantitatively measure their AI models (and associated data) to understand their operation and compliance with those policies and frameworks. This measurement needs to scale across all their AI models and data in a comparable and repeatable manner. Without it, an organization is operating in the dark, with no automated way to ensure compliance with laws, policies and stakeholder equities.

Chatterbox Labs' patented AI Model Insights (AIMI) platform solves this need for an automated and scalable Responsible AI capability. AIMI does not replace an organization's AI or data assets; instead, it sits as an independent layer across their AI models, generating Responsible AI insights across eight pillars: Explain, Actions, Fairness, Robustness, Trace, Testing, Imitation & Privacy. It can be accessed via a browser-based UI or programmatically via APIs, and supports reporting across three organizational groups: Science, Evaluation (consisting of Subject Experts, Legal, Governance, Compliance & Regulatory) and Leadership (for approval).
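To give a feel for what driving such an evaluation layer programmatically could look like, here is a minimal sketch in Python. The endpoint URL, authentication token, payload fields and pillar selection are hypothetical placeholders, not the actual AIMI API; consult Chatterbox Labs' documentation for the real interface.

```python
# Hypothetical example only: the endpoint, fields and values below are
# illustrative placeholders, not the actual Chatterbox Labs AIMI API.
import requests

AIMI_URL = "https://aimi.example.internal/api/v1/evaluations"  # placeholder deployment URL
API_TOKEN = "REPLACE_WITH_YOUR_TOKEN"                          # issued by your deployment

payload = {
    "model_ref": "s3://models/credit-risk/v7",        # reference to the candidate model
    "dataset_ref": "s3://data/credit-apps/holdout",   # evaluation data, held out from training
    "pillars": ["Explain", "Fairness", "Robustness", "Privacy"],  # subset of the eight pillars
    "report_audiences": ["Science", "Evaluation", "Leadership"],
}

# Submit the evaluation request and capture the identifier for later reporting.
resp = requests.post(
    AIMI_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Evaluation submitted:", resp.json().get("evaluation_id"))
```

A submit-and-report workflow of this shape is what allows the same Responsible AI metrics to be generated consistently and repeatably for every model an organization operates, rather than ad hoc per project.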

As organizations pursue operationalizing AI, they encounter barriers around requisite data governance, confidentiality, sovereignty and privacy, and the network and compute infrastructure to move data and run models. Schuyler Moore, U.S. Central Command’s CTO, recently stated, “Algorithms on their own are increasingly less interesting to us…the question is, do they run on the network with the right classification of other data that we need. …If you think about data being the limiting factor for maturity and function of a model, we at the edge have found that network infrastructure and function is the limiting factor for adoption and use of anything.”  

VMware delivers an independent layer across multicloud environments, agnostic to the underlying cloud or legacy architectures where ML/AI and data workloads operate. VMware is the leader in managing and scaling workloads via a consistent and repeatable multicloud operating model. Now, as organizations contend with a growing need for ML/AI workload management, VMware has announced VMware Private AI, an architecture that gives organizations the flexibility to build and train their own AI models on their own data in a secure, private and highly distributed multicloud fashion, ensuring choice of AI services, confidentiality, unified management and operations, increased efficiency, and data privacy/control.

Over the past two years, Chatterbox Labs and VMware have collaborated around shared core values of independence and agnosticism to scale Chatterbox Labs' patented AIMI platform across VMware's multicloud architecture. Independence is key in this field and is an important driver of how the partnership between Chatterbox Labs and VMware enables organizations to consume underlying AI services. For Responsible AI, it is critical that the tools and techniques used to build the AI are not the same as those used to evaluate it (one must not grade one's own homework). Therefore, AIMI does not build any AI models and has no vested interest in their performance. AIMI is independent of the underlying AI model architecture, allowing teams the freedom to build AI models using whichever tools, services and techniques they wish, while still providing a layer of consistent and repeatable Responsible AI metrics.
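To illustrate the model-agnostic principle (and not AIMI's internal implementation), the sketch below evaluates a model purely through its prediction interface, so the evaluation layer never depends on the framework or tooling used to build the model. The function, data and metric shown (a simple demographic parity gap) are illustrative assumptions.

```python
# Illustrative sketch of framework-agnostic evaluation: the evaluator sees only
# a predict callable and data, never the training code or model internals.
# This is not AIMI's implementation; names and the metric are examples only.
from typing import Callable, Sequence

def demographic_parity_gap(
    predict: Callable[[Sequence[dict]], Sequence[int]],
    records: Sequence[dict],
    group_key: str,
) -> float:
    """Largest difference in positive-prediction rates across groups in `group_key`."""
    preds = predict(records)
    counts = {}  # group -> (total, positives)
    for rec, pred in zip(records, preds):
        total, pos = counts.get(rec[group_key], (0, 0))
        counts[rec[group_key]] = (total + 1, pos + int(pred == 1))
    rates = sorted(pos / total for total, pos in counts.values())
    return rates[-1] - rates[0]

# Any model works, as long as it exposes a predict interface.
def toy_predict(records):
    return [1 if r["income"] > 50_000 else 0 for r in records]

data = [
    {"income": 64_000, "group": "A"},
    {"income": 58_000, "group": "A"},
    {"income": 41_000, "group": "B"},
    {"income": 72_000, "group": "B"},
]
print("Demographic parity gap:", demographic_parity_gap(toy_predict, data, "group"))
```

Because the evaluator only needs predictions and data, the same measurement can be applied to a scikit-learn classifier, a fine-tuned LLM, or a vendor-hosted service, which is the property that makes consistent, repeatable metrics possible across a heterogeneous model estate.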

As your organization moves forward to build, train, and operationalize AI, the partnership of Chatterbox Labs and VMware delivers Responsible AI validation for ethical, trustworthy, responsible and safe AI operations within a secure and private multicloud architecture that enhances control over your data and access to the model providers of your choice.

To learn more about Chatterbox Labs' AI Model Insights (AIMI), visit https://chatterbox.co/products/

To learn more about the VMware Technology Alliance Partner (TAP) Program, visit https://techpartnerhub.vmware.com/programs/tap-program