As a reaction to the US government's AI policies and executive orders, have you audited the datasets used to train your AI models? Or do you plan to? Are there specific practices you have put in place to remove targeted "bias" or "social agendas"?
Biases will always exist. AI models should be trained to provide a spectrum of realistic options and not be the decision-makers. The ultimate decision must be made by a human.
One of the big pushes over the last couple of years from the vendors behind the large LLMs everyone uses for public search has been to root out biases in their training datasets.
If you are referring to your own private dataset, it should have been built from the start to mitigate bias; if it wasn't, it has been feeding you inaccurate data.
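For what it's worth, a first-pass audit of a private dataset doesn't have to be elaborate. Below is a minimal sketch in Python/pandas that reports per-group representation and positive-label rates; the file path and column names (training_data.csv, gender, approved) are hypothetical placeholders, not anything specific to your environment.

```python
# Minimal sketch of a first-pass dataset bias audit on a tabular dataset.
# Assumes a pandas DataFrame; the sensitive attribute and outcome label
# columns here ("gender", "approved") are hypothetical -- substitute the
# attributes and labels relevant to your own data.
import pandas as pd

def audit_bias(df: pd.DataFrame, sensitive_col: str, label_col: str) -> pd.DataFrame:
    """Report group sizes and positive-label rates per sensitive group."""
    report = df.groupby(sensitive_col)[label_col].agg(
        count="count",         # representation: rows per group
        positive_rate="mean",  # outcome skew: share of positive labels per group
    )
    report["share_of_data"] = report["count"] / len(df)
    return report

df = pd.read_csv("training_data.csv")  # hypothetical dataset path
print(audit_bias(df, sensitive_col="gender", label_col="approved"))
```

Large gaps in share_of_data or positive_rate just tell you where to dig deeper; as noted above, a human still has to judge whether a gap reflects a real-world base rate or a flaw in how the data was collected.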
The huge investment required to create a dataset worthy of AI or analytics demands objectivity: that means resisting pressure from both the anti-DEI and the overly woke camps to corrupt the facts.
If you are not working with facts, why even bother to have data? Just go ahead and pull things out of the air, like certain leaders in the White House do on an hourly basis.
The ubiquity of AI is driving our company to normalize AI use, shifting more toward "user beware" and away from "structured review of all AI". Our employee AI-use training is meant to build the critical-thinking skills needed to have a ready answer if legal or compliance starts asking questions about how AI is used for company business.