As OpenAI looks to enhance its policies and reporting systems, a University of British Columbia scholar has raised concerns about the potential for users to be wrongfully flagged. Computer science professor Kevin Leyton-Brown argues that in the pursuit of identifying problematic behavior, many innocent users could be mistakenly flagged.
Leyton-Brown points to past incidents of incorrect flagging, such as companies' attempts to automatically detect and report child pornography. He highlights cases where parents who took innocuous pictures of their children were ensnared by such automated systems. He also cites no-fly lists that have caused significant distress for individuals who have struggled to have their names cleared. “Any kind of system like that is going to have false positives,” Leyton-Brown asserts.
This renewed focus on safeguards follows a directive from federal Artificial Intelligence Minister Evan Solomon asking OpenAI to bolster its protections after the Tumbler Ridge mass shooting. OpenAI has faced scrutiny for not promptly reporting shooter Jesse Van Rootselaar's activity on ChatGPT to law enforcement before the incident.
In addition, OpenAI has been tasked with reviewing previously flagged interactions to ensure they are accurately reported to the Royal Canadian Mounted Police (RCMP). Leyton-Brown says companies that intend to identify problematic behavior on their platforms need to develop separate, well-thought-out systems. He explains that any detection system is bound to be imperfect and will rely on a threshold to determine whether a user is roleplaying, discussing a fantasy, or possibly expressing real intent.
“When you’re speaking to a psychiatrist or another human being, they are forming an opinion about what you are saying as you are having the conversation. AI systems are not like this. They are just literally having the conversation,” Leyton-Brown elaborates. This distinction highlights the limitations of AI in understanding context and nuances in user interactions.
Moreover, Leyton-Brown stresses the importance of having discussions about AI regulation. He believes society should not leave the governance of AI technology solely in the hands of private companies. He notes that OpenAI, or similar companies, could monitor conversations and decide to take action if certain lines are crossed. The key concern remains how such processes should be implemented, what privacy expectations users should have, and the appropriate responses to flagged interactions.
Looking ahead, Leyton-Brown anticipates ongoing discussions concerning AI regulation in the coming months. He also predicts that some level of governmental regulation will likely emerge. B.C. Premier David Eby has indicated that OpenAI will collaborate with the provincial government to advocate for national legislative standards governing the reporting of problematic user interactions related to AI technologies.