Soma CapitalSoma Capital
a
David Lu
David Lu
CS at Stanford, Scout at Soma | prev at Verkada, Battery Ventures, NGP Capital
Published on 5/4/2024
The Rise of Responsible AI: Trends and Innovations
Investment Thesis
As ML and LLM adoption increases, there is an urgent need for responsible and explainable AI.
David Lu
David Lu
CS at Stanford, Scout at Soma | prev at Verkada, Battery Ventures, NGP Capital
Published on 5/4/2024
image

Responsible AI landscape

The responsible AI landscape is quickly heating up, fueled by stricter regulations, ethical mandates, and market demand for secure, fair, and privacy-preserving AI. As companies rush to deploy AI responsibly, startups creating tools that ensure security, privacy, and fairness across the AI lifecycle are in the spotlight. This shift isn't just about compliance; it's about leading in a rapidly evolving space and unlocking lucrative opportunities for those delivering responsible AI solutions.

Investment-wise, it's never been a better time to bet on companies revolutionizing AI compliance and security, especially those tackling model-specific threats, alongside privacy tech aligning with regulations like GDPR, and anti-bias platforms. These domains aren't merely checking boxes but they're building critical infrastructure for sustainable AI integration, promising significant returns as businesses seek to deploy ethical, compliant AI.

AI Security Solutions

image

Source: Robust Intelligence

Securing AI/ML supply chains is becoming increasingly crucial as reliance on open-source libraries and tools grows. For example, we’re seeing companies like ProtectAI emerge to tackle vulnerabilities and ensure secure developer pipelines from the ground up. This proactive approach aims to mitigate risks before models reach deployment. For model testing and evaluation, the trend is moving towards comprehensive benchmarking platforms that can assess robustness against diverse attack vectors, including advanced examples, data poisoning, and model extraction. Some players in this space include Robust Intelligence and CalypsoAI.

Moreover, the rise of LLMs has brought about a new wave of security concerns, particularly around prompt injections. While companies like Prompt Security and Lakera are building solutions, this domain is still relatively nascent. As more enterprises leverage LLMs, we'll likely see increased investment and innovation to secure these powerful but exploitable systems. Another concern for enterprise adoption of LLMs is data privacy and security.

Privacy and AI

image

Source: Hazy

When it comes to privacy in AI, the focus starts at the data layer - the foundation for models. A few key trends are emerging to preserve privacy while enabling effective training. One approach gaining traction is synthetic data generation, where companies like Gretel, Tonic, Mostly AI, and Hazy are finding techniques to create synthetic datasets that maintain key characteristics of the original data without exposing sensitive information, allowing training on privacy-preserved data proxies.

However, synthetic data isn't always a viable option. In these cases, privacy-preserving ML techniques come into play. Mithril Security leverages confidential computing to enable training directly on sensitive data. Meanwhile, federated learning led by companies like FedML, Flower Labs, and DynamoFL* facilitates collaborative training across disparate, decentralized datasets without exposing raw data - ensuring privacy and regulatory compliance.

Fairness and Anti-bias in AI

image

Source: Fiddler

Ensuring model fairness is another critical area seeing innovation across the AI lifecycle. Startups are now creating solutions for comprehensive bias testing and evaluation of models. Moreso, bias can also creep in over time as models continuously learn and evolve with new data. To combat this, continuous monitoring for bias is essential. Companies like Arize, Mona, Fiddler AI, and Arthur are developing integrated bias monitoring capabilities that plug into broader ML observability platforms. This allows tracking model fairness metrics alongside performance, enabling corrective actions to maintain fairness over a model's lifetime. The combination of pre-deployment bias auditing and continuous production monitoring represents a holistic approach to tackling the complex challenge of AI bias. As deploying equitable AI systems becomes both an ethical and regulatory priority, we can expect increased focus and investment in these anti-bias solutions across the AI lifecycle.

Current Early Stage Trends

To examine the trends at the earliest stages, we can shift our focus to the current YC batch (W24). One of requests for startups by YC this year revolves around Explainable AI - the very gist of what we explored above. For example, we have PromptArmor diving into security and compliance, VectorView and Buster focusing on LLM evaluation, and Relari* and Datacure tackling data privacy and synthetic generation.

*denotes Soma portfolio companies

B2B/SaaSAIMachine Learning

Copyright © 2024 Soma Capital - Soma Capital Management LLC. All rights reserved. - Website Disclaimers