Artificial intelligence another avenue for government to thwart transparency
By Ken Rubin
February 17, 2021 - Artificial intelligence (AI) applications in use at federal agencies are rarely publicly known or sufficiently scrutinized. Yet the algorithms deployed to assist or replace the judgment of human decision-makers make predictions, recommendations, and decisions that can significantly affect individuals and public policy.
One example recently brought to light: the Globe and Mail reported that National Defence used AI companies in its diversity recruitment efforts to shortlist executive candidates without informing the privacy commissioner. Nor was the algorithmic impact assessment called for under Treasury Board directives carried out. Currently, departments are not required to seek the privacy commissioner's input on algorithms' impacts, to submit such assessments for commissioner review, or to conduct them alongside privacy impact assessments.
Another example: Communications Security Establishment Canada (CSE) posted a vague statement saying it used “Artificial Intelligence (ITSAP.00.040)”. Upon inquiry, spokesperson Evan Koronewski indicated that AI was used “to recognize new or evolving cyber threat signature(s) or pattern(s)... and to help protect Government of Canada (GOC) systems from cyber exploitation”, as well as in the agency's classified mathematics and computer research.
But there is no public listing of the many algorithms and artificial intelligence tools being used by federal agencies.
Ashley Casovan, formerly at Treasury Board and now executive director of the non-profit AI Global, where she is developing AI certifications, said in an interview that federal agencies pushed back against inventorying their AI uses, calling it too complex to report and “not popular” to do.
Yet the Procurement Canada website carries a long list of companies prequalified to do federal AI work, such as the work done at National Defence. There are, however, no detailed listings of the AI contracts awarded.
And to date, only one algorithmic impact assessment has been posted, by Treasury Board itself: an advance review of the portal set up for online requesters, which uses AI in processing those applying for government records. Nothing, though, was said about whether the AI tools could put the data gathered on applicants to other uses. Treasury Board spokesperson Martin Potvin said in an email that the consultants (Jumping Elephants and GC Strategies) alone cost taxpayers $225,000 to develop the ATIP online portal.
Treasury Board rules note that departments should categorize just how risky their planned uses of AI are. Plans judged too risky, say because of flawed policy calculations or built-in racial biases, should be rejected. Those largely still-undone algorithmic impact assessments, if made available, could help the public understand the risks and uses of AI.
The first federal legislative effort to regulate AI use is found in Bill C-11, now at first reading. On its surface, the bill requires that private-sector AI activities be described in “plain language”, with “explanations” subject to some review by the privacy commissioner, and proven violations could draw substantial fines.
Critics, though, see Bill C-11's prime purpose as making it easier for businesses to collect much more personal data, including by AI means. They do not see sufficient consent, personal data protection, or transparency provisions built into the bill.
And the reviews of both the Access to Information Act and the Privacy Act now slowly getting underway may incorporate the same vague, weak AI regulatory and transparency clauses found in Bill C-11. Under both laws' current broad exemptions, the details of departmental AI use could be denied as proprietary or as matters of national security, law enforcement, and government economic interest.
What's ironic too is that many human-made decisions in government are not even recorded, there being no legal duty to document them.
All too frequently, officials use private emails and personal cell phones to keep their decisions off the accessible record. They also know full well that many of their written decision-making records can be redacted or excluded, and are, in any case, rarely released as full accounts or easily retrieved.
Added to this now is the need to give the public access to the details of all government algorithms and contracts that help determine government policies and affect personal data. Let's see that happen.
Ken Rubin has championed greater transparency for over five decades and is reachable at kenrubin.ca. He is an investigative public interest researcher, author and Senior Fellow at the Centre for Free Expression.