CMU Dean Ramayya Krishnan testifies on need for transparency in AI

During his testimony, Krishnan proposed four key recommendations to Congress aimed at fostering the responsible adoption of AI.

Mehak Luthra

Ramayya Krishnan / Image - Carnegie Mellon University

Earlier this month, a Senate subcommittee on consumer protection, product safety, and data security convened in Washington, D.C., with a primary focus on addressing the challenges related to artificial intelligence (AI). Ramayya Krishnan, who currently serves as the dean of Carnegie Mellon University's Heinz College of Information Systems and Public Policy and is the founding faculty director of The Block Center for Technology and Society, provided testimony during the session.

In his testimony, Krishnan underscored the importance of enhancing accountability and transparency throughout the entire AI lifecycle, from development to deployment, arguing that responsible adoption and use of AI technologies is essential to addressing the risks these systems pose.

“As AI technologies are considered for use in high-stakes applications such as autonomous vehicles, health care, recruiting and criminal justice, the unwillingness of the leading vendors to disclose the attributes and provenance of the data they have used to train and tune their models and the processes they have employed for model training and alignment to minimize the risk of toxic or harmful responses needs to be urgently addressed,” Krishnan said.

To foster responsible AI adoption, Krishnan proposed four key recommendations to Congress. First, he suggested that Congress mandate the use of the National Institute of Standards and Technology (NIST) AI Risk Management Framework by all federal agencies throughout the AI lifecycle, from design to management.

Second, he recommended establishing standardized documentation for the AI pipeline—comprising training data, models, and applications—verifiable by trusted third parties, akin to financial statement audits.

His third recommendation addressed content labeling and detection standards, given the growing capability of generative AI to produce human-like content. Finally, Krishnan proposed investing in an AI trust infrastructure, similar to the Computer Emergency Response Team (CERT) for cybersecurity, to connect vendors, catalog incidents, record vulnerabilities, test and verify models, and disseminate best practices.