This article was first published by Digiday sibling WorkLife.
Zoom has made waves this week: first for its RTO mandate, and second for a potential AI-related data privacy oversight.
Like many other companies this year, Zoom has upped its AI capabilities. In March, it updated its policies to allow broad access to user data to train its AI models. That change drew intense scrutiny when it was reported over the weekend, sparking questions and alarm from its customers and data privacy advocates.
The big question is: Should we be able to opt out of having our data used to train generative AI systems? AI models need data to train on and improve, but which data should be fair game?
Zoom’s new AI features
The company launched some of its AI-powered features earlier this year, which let clients summarize meetings without having to record an entire session. It's something that a lot of workplace tools have been doing, such as Otter.ai's OtterPilot, a smart AI meeting assistant that can join meetings on Zoom, Google Meet or Microsoft Teams.
Zoom’s features include Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose, which are offered on a free trial basis. Zoom account owners and administrators control whether to enable these AI features for their accounts.
Its utility is compelling. For example, if a team member joins a Zoom meeting late, they can ask Zoom IQ to summarize what they've missed in real time and ask follow-up questions. Or if they need to create a whiteboard session for their meeting, Zoom IQ can generate it based on text prompts.
Generative AI’s ability to speed up tasks and improve work efficiency is clear. But ensuring it is used ethically and responsibly by both organizations and individuals is less clear cut. And that’s exactly what Zoom got grilled for.
“There are a lot of positive benefits of integrating AI with their [Zoom] platform, but not at the expense of consumer privacy,” said Jeff Pedowitz, an AI expert and author.