Which pitfalls—model bias, false positives/negatives, data quality, regulatory constraints—often impede AI-based security tools, and how can they be mitigated in a financial-services context?
Sort by:
Data, especially biased data, is a huge concern. Companies just starting should consider "synthetic data" to test the integrity of their AI. Another pitfall not mentioned is worker bias towards AI, will they use it in the first place?
First and foremost, I would not recommend going down to model level when trying to implement digital security. Use the tools where much of low-level issues you mention are being addressed by tool's product team. Basically, don't build - let others do it right, and use the result of that work.
And for data quality - AI specifically relies on prompts, most of the businesses have data landscape very conductive to contain prompt injections. That must be addressed with great attention, otherwise no matter what AI tools you use, you may get yourself in trouble - the bigger one, the more power such tools have.
I wrote a blog post about this topic:
https://www.ismc.at/?p=76