How generative AI can unlock business impact, without compromising our humanity

As more companies adopt generative artificial intelligence (AI), the conversation is no longer about “if” but about “how.”
Generative AI has enormous potential to transform how we work and communicate—and professionals are already seeing its benefits.
More than three-quarters of workers using generative AI say it makes them better at their job, and 71% report that the technology has transformed how they communicate at work. Company leaders are noticing the results—at least half of those using generative AI say they have seen increased efficiency and higher productivity, while 38% say they have saved costs. [1]
Yet despite widening usage, generative AI’s potential remains largely untapped: 58% of workers still wish their companies were more open to it, and 52% say they don’t know how to use it well. Businesses also face challenges in implementing AI safely and worry about the perceived hazards of machine-assisted communication.
“There are a lot of risks with AI—privacy risks, security risks, job displacement risks,” says Rahul Roy-Chowdhury, CEO at Grammarly, an AI-powered writing assistance tool used by people at 96% of the Fortune 500. [2] “All of that requires a responsible view of how we deploy these technologies. But at the same time, I want to make sure we don’t lose the magic.”
Companies can’t afford to sit on the sidelines and get overtaken by competitors who are using generative AI to supercharge productivity. But they also can’t afford to be careless with implementation.
When not deployed thoughtfully, generative AI can expose companies to unintended consequences such as intellectual property theft, fraud and reputational damage. [3] It also poses real risks to human autonomy and safety by perpetuating biases. [4]
This is why building AI literacy through education and training across teams is vital. Companies should not only carefully select tools with stringent privacy and security requirements in mind, but also establish clear expectations and usage guidelines that empower all employees to use them effectively.
“This is a really exciting time, and you’re always going to have people on your team who are early adopters. That’s how we learn. That’s how we get adoption throughout an organisation,” says Shelly Kramer, managing director and principal analyst at theCUBE Research, who advises technology brands. “But we should temper that excitement with a little bit of caution.”
For companies looking to create enthusiasm and conviction about AI adoption, culture is everything. They should promote an environment that encourages employees to voice questions or concerns.
“How can we engender trust both internally in what we’re doing with AI and externally with our customers, and scale AI with confidence?” Ms Kramer says. “We need to embrace responsible AI.”
Safely implementing generative AI is essential, but the onus is ultimately on those building these tools to ensure they’re developed ethically and with intention.
Mr Roy-Chowdhury points to AI-powered résumé screening as one recent example of how the technology can cause harm when not developed responsibly. An estimated three-quarters of résumés submitted for US jobs are read by algorithms, which can learn harmful biases from existing data and thereby filter out qualified candidates. This isn’t a theoretical risk; 88% of executives say they know their tools reject qualified candidates. [5]
An intentional commitment to responsible AI and clarity of purpose are necessary to prevent similar outcomes from generative AI and ensure the technology augments—not inhibits—human potential. “We, as technology providers, infuse the right values and the right way to deploy these tools to our users. It’s a deliberate choice,” Mr Roy-Chowdhury says.
Grammarly has long been dedicated to the responsible development and use of its technology. As generative AI ballooned in 2023, Grammarly furthered this focus by sharing its TRUE framework, which guides the company’s product development and can serve as a model for other companies. [6] It consists of four key values: trust, responsible development, user control and empathy.
Following this model, companies should first build trust with customers by prioritising privacy and security, including protecting customers’ rights to control and access their data and taking robust measures to safeguard that data.
They should also ensure responsible development by working to build and improve systems that reduce bias. One way to do this is to form responsible AI teams and create policies and procedures that guide their work. These policies should include clearly defined strategies for considering and testing for bias and risk both before and after products reach customers’ hands.
For user control, it is important to consider how to help customers enhance their work without losing their autonomy. Technology is only as valuable as the people it helps; keeping humans in control and furthering their potential must always stay the focus.
And in terms of empathy, companies should develop with an understanding of the real challenges and needs of their customers, not innovate for innovation’s sake. This framework can help organisations ensure that they are building toward a responsible future.
Ms Kramer also encourages companies to prioritise transparency throughout all development and adoption.
“Making transparency a primary underpinning of everything that you’re doing is how you build and inspire trust within your organisation, with your customers and within the industry,” she says.
These early days of AI adoption will have an outsized impact on how the technology shapes our working lives. A shared focus on safeguards and values can help to ensure that AI is a tool to enhance, rather than replace, human productivity and communication.
This stance can be powerful. Businesses using Grammarly report not only productivity savings averaging 19 working days per employee per year, but also better relationships and greater satisfaction among teams and customers.
“The idea that AI can help you show up better, communicate with more confidence, achieve your outcomes more reliably—that’s a great future,” Mr Roy-Chowdhury says.
REFERENCES
1 Grammarly and the Harris Poll, “The 2024 State of Business Communication: AI’s Potential to Turn Overload Into Impact.”
2 Business Wire, “Grammarly Defies the AI Hype with Significant Business Impact, Deepens AI Support for Enterprises,” 25 October 2023.
3 KPMG, “The flip side of generative AI,” 2023.
4 TechTarget, “7 ways AI could bring more harm than good,” 7 December 2023.
5 The Guardian, “Finding it hard to get a new job? Robot recruiters might be to blame,” 11 May 2022.
6 Grammarly, “A Framework for Industry Responsibility and Accountability in the Age of Generative AI,” 1 May 2023.