Details

  • Perplexity announced the launch of the Secure Intelligence Institute (SII) on March 31, 2026.
  • SII partners with leading teams in cryptography, security, and machine learning to advance research and foster industry collaboration.
  • The institute is led by Dr. Ninghui Li, a professor at Purdue University specializing in security and privacy.
  • SII's debut paper responds directly to NIST's request for information on securing autonomous agents.
  • The paper is available on arXiv and marks the institute's first contribution to AI security standards.
  • Initiative builds on growing demand for secure AI systems amid rising autonomous agent deployments.

Impact

Perplexity's Secure Intelligence Institute positions the company as a key player in AI safety research, directly engaging NIST's request for information on autonomous agents in order to shape emerging standards. Under the leadership of Purdue's Dr. Ninghui Li, the institute fosters collaboration across cryptography, security, and machine learning, potentially accelerating the development of secure AI systems. By prioritizing verifiable safety protocols amid growing regulatory scrutiny from bodies like NIST, the initiative puts pressure on rivals such as OpenAI and Anthropic, whose research emphasizes general capabilities. It also narrows Perplexity's gap in specialized security infrastructure, aligning with broader trends in AI governance and supply-chain protection reflected in events like SUSHI@NIST.