/markets

News and resources on capital markets, exchanges, trade execution and post-trade settlement.

Discussion
AI should be trained to respect a regulatory 'constitution' says BofE policy maker
Jamie French

  Oh dear, I don't think Mr Kroszner understands how these models work.
“Nobody gets fired for buying IBM”
Ketharaman Swaminathan

  There were tons of established, reliable tech vendors in the era when the phrase "Nobody gets fired for buying IBM" was coined. From what I know, the backstory for this phrase is not tech chops but integrity, which was unique to IBM: IBM never bribed customers to win deals, unlike most of its competitors. Tech projects fail for a lot of reasons, such as change management challenges and poor data quality, for which the customer company is to blame, not the CIO or the tech vendor. When a customer bought from somebody other than IBM and the project failed, aspersions would immediately be cast on the CIO's integrity; it would be taken for granted that the CIO had taken a bribe, and s/he would be fired without any due process, such as a post-mortem or inquest into other possible reasons for the failure. Whereas when a customer bought from IBM and the project failed, corruption was completely ruled out, the CIO got the benefit of a "fair trial", other reasons unrelated to the CIO would inevitably be found for the failure, and the CIO got to keep his or her job.
AI should be trained to respect a regulatory 'constitution' says BofE policy maker
John Davies

  It's a lovely idea, but simply not feasible or even technically possible. It's like putting back doors into encryption: not mathematically possible without fundamentally breaking the cryptography. Firstly, the LLMs or GPTs being referenced are global, so whose regulations do you build into the model? Secondly, LLMs don't follow rules like that; they can be tuned in a direction, but like people they find loopholes, and unless a rule is rock solid and unambiguous, which regulations rarely are, it simply won't hold in an LLM. Censored LLMs, which is effectively what this is suggesting, work badly: you're training the model and then un-training it. We saw what happened with some of the recently censored models proposing black German soldiers in 1943 and Native American founding fathers of the USA, all in the admirable pursuit of inclusion, but failing. What is needed is better management of LLMs: the UK and EU should be using privately hosted LLMs, with frameworks around them to assert compliance and adherence to regulatory practices. That is a hybrid of LLMs, RAG and traditional integration. It can be done, but not in the way suggested here.
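
  A minimal sketch of what that hybrid might look like, assuming a privately hosted model behind a placeholder query_private_llm() call: regulatory text is retrieved RAG-style and injected into the prompt, and a deterministic rule check sits outside the model, so compliance is asserted by the surrounding framework rather than trained into the weights. Every name here (REGULATIONS, retrieve_regulations, violates_rule, query_private_llm) is a hypothetical stand-in, not a real API.

```python
from dataclasses import dataclass


@dataclass
class ComplianceResult:
    answer: str
    approved: bool
    citations: list[str]


# Toy in-memory "regulatory corpus" standing in for a real document store.
REGULATIONS = {
    "MiFID II Art. 27": "Firms must take all sufficient steps to obtain the "
                        "best possible result for their clients.",
    "FCA PRIN 2.1": "A firm must conduct its business with integrity.",
}


def retrieve_regulations(question: str) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_words = {w.strip("?.,").lower() for w in question.split()}
    return [
        f"{ref}: {text}"
        for ref, text in REGULATIONS.items()
        if q_words & {w.strip(".,").lower() for w in text.split()}
    ]


def query_private_llm(prompt: str) -> str:
    """Placeholder for a call to a privately hosted model."""
    return f"Per the cited rules: {prompt.splitlines()[-1]}"


def violates_rule(answer: str) -> bool:
    """Deterministic post-hoc check; the rules live outside the model."""
    banned = ("guaranteed returns", "risk-free profit")
    return any(phrase in answer.lower() for phrase in banned)


def answer_with_compliance(question: str) -> ComplianceResult:
    citations = retrieve_regulations(question)
    prompt = "\n".join(citations + [f"Question: {question}"])
    answer = query_private_llm(prompt)
    # The model is never trusted to police itself; the framework decides.
    return ComplianceResult(answer, not violates_rule(answer), citations)


if __name__ == "__main__":
    result = answer_with_compliance("How must a firm handle client orders?")
    print("approved:", result.approved)
    for c in result.citations:
        print("cited:", c)
```

  The design point is that the hard, unambiguous rules are enforced in ordinary, auditable code around the model rather than hoped for inside it.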
Climate Scorpion: The Sting is in the Tail - Introducing Planetary Insolvency
Mark Sibthorpe

  Good grief! Are these really the people in charge of making decisions about the future of our planet?