They should be required to publish the system prompts and any other prompting used.
In the EU they are, why?
Oh really?
I suggest this because how they instruct the AI models to be fair when making decisions would matter. That only holds when a generic LLM is used, though: if the model is trained specifically for this purpose and the weights themselves are biased against the customer, then prompt transparency wouldn't help. We would need full model and training transparency, and researchers to go through it all and see what's going on.
What I'm saying is that AI can be used to bias decisions in favor of the credit card company even more ruthlessly than current techniques do.
If "AI" here just means "if this then that" rules rather than LLMs, then my suggestion isn't relevant.
That's exactly the reasoning the EU had when they made it into law :)