Is your organization pursuing large-scale AI enablement, such as enterprise-wide access to tools like Microsoft Copilot, or are you focusing on high-value use cases within specific functions? Why have you chosen that approach?
The company we’re exploring a partnership with offers a solution that combines several of the popular large language models and machine learning capabilities into a single offering. It’s an interesting proposition, and we’re still evaluating whether it will work for us or simply cause confusion. Access will be strictly controlled, limited to specific service groups or group IDs, and contained within a dedicated environment for testing and validation.
We’re just getting started and are taking a cautious approach. What we’re seeing is that many emerging solutions aren’t tied to a single provider. For example, Microsoft’s tools have evolved significantly, and platforms like GCP now offer Gemini as part of their ecosystem.
Personally, I prefer to remain agnostic, as being locked into one provider can limit flexibility. There may be a need for more generic solutions in the future. Sharing experiences and learning from each other will be valuable as we move forward.
We haven’t rolled out Copilot enterprise-wide. Instead, we have a subset of users piloting it, and we’re collecting feedback to determine whether it’s worthwhile for everyone at the company or just for corporate users. We also use Microsoft Fabric to support use cases like sourcing data from different repositories, enabling users to ask natural language questions and gain insights.
Most of our work is quarantined within our network, and we’re implementing checks and balances, such as an insider-risk tool and browser plugins for Chrome and Safari, to prevent users from including sensitive information in prompts to tools like ChatGPT or Claude. We have a draft AI governance policy in place that sets out basic rules, such as not uploading financial information.
Our approach includes Copilot, security tools, data analytics with Fabric, and Copilot embedded within those tools. We’re also using automation tools that come packaged with our COTS products, like JD Edwards and Salesforce. We’re more comfortable rolling out those AI capabilities to the broader team because they’ve been vetted and the risk is managed by the offering company, rather than building our own implementations.
We have deployed ChatGPT Enterprise, but on the focused, high-value side, we recently implemented a company-sourcing tool to support investment opportunities. Users input a few metrics and prompts specifying what they’re looking for and what to exclude. The tool generates an initial list, conducts deep research on each entry, and returns a refined list of potential investment opportunities.
This product has passed all proofs of concept and trials and has seen significant use. Beyond general GPT applications, it’s a notable example of a specific, high-value use case that has been widely adopted across our organization.