Which pitfalls—model bias, false positives/negatives, data quality, regulatory constraints—often impede AI-based security tools, and how can they be mitigated in a financial-services context?

3.6k viewscircle icon4 Upvotescircle icon3 Comments
Sort by:
Director of Information Security in Finance (non-banking)21 hours ago

I wrote a blog post about this topic:
https://www.ismc.at/?p=76

Director10 days ago

Data, especially biased data, is a huge concern. Companies just starting should consider "synthetic data" to test the integrity of their AI. Another pitfall not mentioned is worker bias towards AI, will they use it in the first place?

Lightbulb on1
VP of Engineering11 days ago

First and foremost, I would not recommend going down to model level when trying to implement digital security. Use the tools where much of low-level issues you mention are being addressed by tool's product team. Basically, don't build - let others do it right, and use the result of that work.

And for data quality - AI specifically relies on prompts, most of the businesses have data landscape very conductive to contain prompt injections. That must be addressed with great attention, otherwise no matter what AI tools you use, you may get yourself in trouble - the bigger one, the more power such tools have.

Lightbulb on1

Content you might like

Semantics40%

Necessary55%

Neither5%

View Results

Agree75%

Disagree22%

Undecided3%

View Results