AI Coaching & Workplace Trust
Earlier this week I met the team at Valence, creators of Nadia, a workplace AI coach. Our conversation explored what it takes to build and maintain trust in such services.
Services like Nadia, already used by staff at companies including Costa Coffee, Coca-Cola, Prudential, Experian and WPP, are democratising access to workplace coaching, a benefit once reserved for senior leaders (if available at all).
In a typical use case, a large retailer makes Nadia available to frontline managers. Store managers often feel isolated, and recent research highlights the growing strain of dealing with customer conflict. With Nadia, managers can discuss their challenges and concerns with an AI trained on their workplace context.
What determines whether such tools win and retain our trust?
Privacy and control appear to be prerequisites for adoption. As a frontline manager or team member, I want to know that neither the content of my coaching conversations nor my usage history is available to the company or to my line manager.
User experience is another non-negotiable. “Using AI should feel like working with a teammate”, Tom Lawrence of MVPR told me recently (for context, we were discussing his agency’s in-house AI platform). In a coaching scenario, I’d expect any tool to understand my workplace setting and offer useful feedback and actionable advice, such as role-playing and personalised development plans. An approachable, empathetic conversational style would encourage me to keep coming back.
Explainability is often discussed in relation to AI, yet it’s unclear whether employees care how a tool works. As long as it is seen as helpful and supportive, staff may be happy to accept that it’s a black box. However, explainability seems likely to come under the spotlight in the event a workplace dispute (or worse) is traced back to advice given by an AI coach.
Of course, the need to establish trust in workplace AI begins with employers, who must balance benefits like employee development, engagement and retention against potential risks such as privacy breaches. A high-profile slip-up could stall uptake or even cause reputational damage.
Commercial considerations are another factor. While staff want assurance that personal information is not shared with their employer, employers need to justify their investment in AI tools and measure ROI. Providing aggregated, anonymised metrics (which organisations could also publicise internally) should help build and maintain trust on both sides.
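To make this concrete, here is a minimal sketch of the kind of privacy-preserving reporting this implies. It is entirely hypothetical (not a description of how Nadia works): individual conversations and identities stay private, and usage is only surfaced at team level once a minimum cohort size is met.

```python
from collections import defaultdict

# Hypothetical aggregated, anonymised usage reporting: the employer sees
# team-level totals only, never individual sessions or transcripts.
MIN_COHORT_SIZE = 5  # suppress any group small enough to identify someone


def team_usage_report(sessions):
    """sessions: list of dicts like {"team": "Store 12", "user_id": "u1", "minutes": 30}.
    Returns per-team totals, withholding teams below the anonymity threshold."""
    users_per_team = defaultdict(set)
    minutes_per_team = defaultdict(int)
    for s in sessions:
        users_per_team[s["team"]].add(s["user_id"])  # count distinct users
        minutes_per_team[s["team"]] += s["minutes"]
    return {
        team: {"active_users": len(users), "total_minutes": minutes_per_team[team]}
        for team, users in users_per_team.items()
        if len(users) >= MIN_COHORT_SIZE  # small cohorts are suppressed entirely
    }
```

The key design choice is the suppression threshold: reporting on a “team” of two would effectively deanonymise its members, so small cohorts are withheld rather than reported.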
Linked to this, a degree of transparency about model training, performance monitoring and safety controls should also enhance trust.
The bigger picture of how AI is used across an organisation is also likely to come into play. For example, the use of AI in performance reviews (even informally, via line managers using ChatGPT) could leave employees feeling a loss of agency and undermine confidence in AI coaching.
My conversations with the Valence team revealed a thoughtful approach to these and other questions. Their impressive client roster and uptake metrics (plus a successful $50 million raise) suggest they’re on the right track.
AI coaching & workplace trust: key takeaways
Privacy guarantees are non-negotiable. Uptake of AI workplace coaching and similar tools is likely to depend on employees being offered explicit, strong data protections.
Trust will be context-specific. People may trust AI in developmental or advisory roles, but are likely to be less keen on being evaluated by it.
Demand for transparency and explainability could grow, particularly in the event of disputes linked to AI-generated advice, or if a debate emerges about the influence of AI coaching on performance and/or workplace relationships.