AI in the Global South at the Global SME Finance Forum in Johannesburg
I have always found that innovation comes from conversation, not isolation. Yesterday’s panel discussion on minimum viable regulation for AI in the Global South at the Global SME Finance Forum in Johannesburg offered such a moment.
Over the course of the discussion, it became clear that regulators around the world agree on the basic principles by which generative AI models should be managed: fairness, explainability, risk-tiering, monitoring, and so on. The variation comes in the regulatory process. When you compare the EU, Canada, Singapore, and others, the primary difference lies in the assumption of what is knowable pre-deployment. I believe the basic flaw in the EU AI Act is that it still rests on the notion that you can know how a model will perform before deployment. This has been true of past statistical models, because the boundaries of the application domain are defined and testable. It is not true for most applications of generative AI, because the application domain is defined in real time by the human users’ creativity.
Instead, I think we should borrow the process used in drug discovery. No drug can be fully known in a laboratory; humanity is far too diverse to test all possible outcomes. Instead, drug testing follows a staged process where the later stages involve humans in real use. We need a similar staged deployment process for generative AI models used in any context, so that we can see how the diversity of humanity interacts with them. Each stage of testing may reveal previously unknown risks and provide remediation opportunities before usage expands and exposes the institution to significant risk.
Key to the success of a staged AI risk-management process is continuous, detailed monitoring and a rapid response to emerging problems, with clear fallback procedures. Conveniently, this is already part of Canada's OSFI Guideline E-23.
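To make the idea concrete, here is a minimal sketch of such a staged gate, in the spirit of clinical trial phases. All stage names, exposure caps, and thresholds are hypothetical illustrations, not drawn from any actual regulation: exposure widens only while monitored metrics stay within each stage's risk tolerance, and a breach triggers the fallback procedure.

```python
from dataclasses import dataclass

# Hypothetical staged-deployment gate. Stage names, user caps, and
# error-rate thresholds below are illustrative assumptions only.
@dataclass
class Stage:
    name: str
    max_users: int          # exposure cap while in this stage
    max_error_rate: float   # monitored threshold that triggers fallback

STAGES = [
    Stage("sandbox", 100, 0.05),       # internal users only
    Stage("pilot", 10_000, 0.02),      # limited external exposure
    Stage("general", 1_000_000, 0.01), # broad deployment
]

def next_action(stage: Stage, observed_error_rate: float) -> str:
    """Decide whether to expand exposure or fall back, based on monitoring."""
    if observed_error_rate > stage.max_error_rate:
        # Rapid response: revert to the previous stage or a manual process.
        return "fallback"
    # Risks within tolerance: eligible to proceed to the next stage.
    return "expand"
```

The point of the sketch is that the decision to widen exposure is always conditional on observed behavior, never settled pre-deployment.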
Following the legacy process of a single step to deployment, even with aggressive pre-deployment regulation, cannot remove post-deployment risk for AI systems, because there is no opportunity to discover unanticipated risks. Trust can only be established through a well-designed, staged approach. A single-step process, no matter how rigorous, is prone to failure, and will actually reduce trust in the long term.
Raadhika Sihin
Global SME Finance Forum
Model Risk Managers' International Association (MRMIA)
#GSMEFF25
www.deepfuturanalytics.ai