A few years ago, Karine Mellata and Michael Lin met while working on Apple’s fraud engineering and algorithmic risk team. Both engineers, they helped tackle online abuse problems, including spam, botting, account security and developer fraud, for Apple’s growing customer base.
Despite their efforts to develop new models to keep pace with evolving patterns of abuse, Mellata and Lin felt they were falling behind, stuck rebuilding core elements of their trust and safety infrastructure.
“As regulation places greater scrutiny on teams to centralize their somewhat ad hoc trust and safety responses, we saw a real opportunity to help modernize this industry and build a safer internet for everyone,” Mellata told TechCrunch in an email interview. “We dreamed of a system that could magically adapt as fast as abuse itself.”
Intrinsic’s platform is designed to moderate both user- and AI-generated content, providing the infrastructure for customers (mostly social media companies and e-commerce marketplaces) to detect and take action on content that violates their policies. Intrinsic focuses on safety product integration, automatically orchestrating tasks like banning users and flagging content for review.
“Intrinsic is a fully customizable AI content moderation platform,” Mellata said. “For example, Intrinsic can help a publishing company that produces marketing materials avoid giving legally liable financial advice. Or we can help marketplaces identify listings like brass knuckles, which are illegal in California but not in Texas.”
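Intrinsic hasn’t published its API, but a toy sketch helps illustrate the kind of region-dependent rule Mellata describes. Every name below is invented for illustration, and the policy table is a placeholder, not legal guidance:

```python
# Hypothetical sketch only -- Intrinsic has not published an API; these names
# are invented to show what a region-aware listing policy might look like.

from dataclasses import dataclass


@dataclass
class Listing:
    title: str
    category: str
    ship_to_state: str  # two-letter US state code


# A policy table mapping restricted categories to the states that ban them.
# The entries are illustrative placeholders, not legal advice.
RESTRICTED_BY_STATE = {
    "brass_knuckles": {"CA"},
}


def violates_policy(listing: Listing) -> bool:
    """Return True if the listing's category is banned in its destination state."""
    banned_states = RESTRICTED_BY_STATE.get(listing.category, set())
    return listing.ship_to_state in banned_states


# The same product is flagged for California but allowed for Texas.
print(violates_policy(Listing("Brass knuckles", "brass_knuckles", "CA")))  # True
print(violates_policy(Listing("Brass knuckles", "brass_knuckles", "TX")))  # False
```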
Mellata argues that there are no off-the-shelf classifiers for such nuanced categories, and that even a well-resourced trust and safety team would need several weeks, or even months, of engineering time to add new automated detection categories in-house.
When asked about rival platforms like Spectrum Labs, Azure and Cinder (which is nearly a direct competitor), Mellata says Intrinsic stands out with (1) its explainability and (2) its greatly expanded tooling. Customers can “ask” Intrinsic about mistakes in its content moderation decisions and get an explanation of its reasoning, Mellata explained. The platform also hosts manual review and labeling tools that let customers fine-tune moderation models on their own data.
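To picture what an explainable decision plus a reviewer correction might look like, here is a purely hypothetical sketch; the fields and values are invented, not Intrinsic’s real data model:

```python
# Hypothetical sketch only -- these fields are invented to show the shape of
# an "explainable" moderation decision and the reviewer feedback that could
# later serve as fine-tuning data. Not Intrinsic's actual API or schema.

import json

# A moderation decision that carries its own rationale.
decision = {
    "item_id": "listing-4821",
    "action": "flag_for_review",
    "policy": "weapons.restricted_by_region",
    "explanation": "Category 'brass_knuckles' is restricted in destination state CA.",
    "confidence": 0.91,
}

# A human reviewer who disagrees attaches a label, producing a training example.
feedback = {
    "item_id": decision["item_id"],
    "reviewer_label": "allowed",
    "note": "Item is a costume prop, not an actual weapon.",
}

print(json.dumps({"decision": decision, "feedback": feedback}, indent=2))
```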
“Most conventional trust and safety solutions are inflexible and weren’t designed to evolve with abuse,” Mellata said. “Now more than ever, resource-constrained trust and safety teams are seeking vendor help and looking for ways to cut moderation costs while maintaining high safety standards.”
Intrinsic’s near-term plans are to build out its three-person team and expand its moderation technology to cover video and audio in addition to text and images.
“The broader slowdown in tech is driving greater interest in automation for trust and safety, which puts Intrinsic in a unique position,” Mellata said. “COOs care about cutting costs. Compliance officers care about mitigating risk. Intrinsic helps with both. We’re cheaper, faster and catch far more abuse than existing vendors or equivalent in-house solutions.”