The European Commission used data on the accuracy and precision of AI tools to detect child sexual abuse material (CSAM) online solely from Meta and one other tech firm, an access to documents request filed by former MEP Felix Reda revealed.
Independent analyses, external reviews or further details on the tech companies' underlying datasets were disregarded in the impact assessment for the proposed CSAM regulation.
The EU proposal includes the possibility of issuing court orders requiring the scanning of encrypted communications to detect CSAM, triggering concerns that this could open the door to large amounts of false positives, with private content being disclosed to law enforcement.
To address this criticism, the European Commissioner for Home Affairs Ylva Johansson, who is leading on the file, published a blog post on 7 August stating that for the detection of new child sexual abuse material, there is technology available with a precision rate of 99.9%.
The following day, Felix Reda, a former EU lawmaker now working for the non-profit organisation Gesellschaft für Freiheitsrechte, sent a Freedom of Information (FOI) request to the EU executive, asking for the sources of this precision rate.
In its response via AskTheEU, which was delayed by weeks even after a deadline extension for exceptional circumstances, the Commission cited Meta and Thorn's commercial product Safer as the sources of the information. According to Thorn, the "latest tests" of Safer's classification model show a 99% precision rate.
"It is not meaningful on a technical level for a company to say it has run a test and state a certain accuracy when the actual test data is not known," Reda told EURACTIV.
On the open internet, the amount of falsely detected content could be significantly larger because the proportion of CSAM among all content on messenger services is probably very small, Reda continued.
The number of false positives matters because harmless messages, chats and pictures of innocent people containing explicit content could end up on the screens of investigators.
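The base-rate effect Reda points to can be illustrated with a minimal sketch. All figures below are hypothetical assumptions for illustration, not data from Thorn, Meta or the Commission: a classifier that achieves 99% precision on a balanced test set can see its precision collapse when CSAM makes up only a tiny fraction of the content actually scanned.

```python
# Minimal sketch of the base-rate effect. All numbers are hypothetical
# assumptions chosen for illustration, not figures from Thorn, Meta or
# the Commission.

def precision(recall: float, fpr: float, prevalence: float) -> float:
    """Share of flagged items that are true positives (positive predictive value)."""
    true_positives = recall * prevalence
    false_positives = fpr * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumed classifier: 99% recall, 1% false-positive rate. On a balanced
# test set (half CSAM, half benign), this yields 99% precision.
recall, fpr = 0.99, 0.01
print(f"balanced test set: {precision(recall, fpr, 0.5):.1%}")   # ~99.0%

# Deployed where, say, only 1 in 10,000 scanned items is CSAM, the same
# classifier's precision collapses: nearly every flag is a false alarm.
print(f"low prevalence:    {precision(recall, fpr, 1e-4):.1%}")  # ~1.0%
```

Under these assumed numbers, roughly 99 out of every 100 flagged items would be false positives, which is why the composition of the vendors' test data matters for judging the published precision rates.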
Thorn did not provide EURACTIV with details on the datasets and methods behind its tests in time for publication.
Moreover, the Commission should have carried out tests itself instead of relying on companies that might end up profiting as providers of chat control software in the future, Reda argues.
The Commission presented its proposal to prevent and combat CSAM in May 2022, sparking backlash over its provision for a generalised scanning obligation for messaging services and the risk of breaking end-to-end encryption, as EURACTIV reported.
In July, the European Data Protection Board and the European Data Protection Supervisor adopted a joint opinion, saying that the proposal in its current form might present more risks to third parties than to the criminals it intends to pursue for CSAM.
Next Monday, Commissioner Johansson will present the proposal to the Committee on Civil Liberties, Justice and Home Affairs (LIBE), which will start negotiations in Parliament. The Council is already working on the text but is not yet dealing with the controversial filtering obligations.
The Commission did not immediately respond to EURACTIV's request for comment.
[Edited by Luca Bertuzzi/Nathalie Weatherald]
https://www.euractiv.com/section/digital/news/eu-assessment-of-child-abuse-detection-tools-based-on-industry-data/