Response to the ‘Proposals Paper for introducing mandatory guardrails for AI in high-risk settings’

A public submission by the Digital Child

Submission made to the Australian Government’s proposals paper, Introducing mandatory guardrails for AI in high-risk settings.

 

Submission prepared by Professor Tama Leaver.

We commend the development of the Mandatory Guardrails approach, and the resourcing and speed with which this process is proceeding. For the most part, the proposed principles and guardrails do extremely important work and will go a long way toward protecting Australians using AI tools. However, viewed through the lens of Australian children, specific language is required to delineate the potentially differentiated experience of children using AI tools.

This language should ensure that any transparency about how AI tools operate is accessible in age-appropriate language. If, for example, an AI tool is being used commercially or in educational settings by 8 or 13 year olds, then descriptions of how it operates, what data it collects, and so forth, should be expressed in terms those child end-users can readily understand and base meaningful action on (such as the decision not to use the tools).
