Let’s ‘Double-Click’ on the Latest Cringeworthy Corporate Buzzword
You may want to examine, or delve into, the phrase, which has become pervasive in conference calls and grates on many; ‘It’s almost like a joke’
Ruben Roy isn’t a guy who tends to beat himself up, but he’s still chagrined about what he said on an earnings call last month.
A managing director at Stifel Financial, Roy dialled in to hear the chief executive of a healthcare company discuss its latest results. During the Q&A, Roy asked the speaker to elaborate on his remarks about investment opportunities.
“I wanted to double-click a bit on some of the commentary you had,” Roy said, instantly cringing.
One of the fastest-spreading corporate buzzwords in recent years, “double-click” is both polarising and pervasive. Particularly on Wall Street, the figure of speech is now being used as a shorthand for examining something more fully, akin to double-clicking to see a computer folder’s contents. Some, like Roy, find the idiom obnoxious or twee. Double-click defenders say the phrase encourages deeper thinking.
Either way, it’s become a verbal tic du jour. Executives and analysts dropped double-click 644 times in corporate conference calls and events during the first half of the year, according to VIQ Solutions, up from 139 times in the same period of 2020.
“It’s almost like a joke. People are like, oh here we go with double-click,” says Roy, who’d been trying to avoid using the term when he accidentally let it slip. Colleagues, he says, haven’t let him forget it.
Annie Mosbacher, a Los Angeles-based marketer, recalls snapping to attention last year when she heard an executive use the phrase during a strategy meeting. Afterward, she and colleagues discussed it: “It was like, oh my gosh, double-click? I guess this is a thing now?”
The new jargon makes her roll her eyes. “Can’t we just say ‘this is an area we need to focus on?’” she says. “We regurgitate this sort of lingo as though it means something, and usually it’s about trying to be impressive more than anything else.”
Not so, says Ruben Linder, who’s owned a small audio and video production business in San Antonio for 25 years. These days, with the rise of technology and a more hectic corporate life, Linder says people need reminders to stop and examine what matters—to double-click, if you will.
“The term is simple, but it’s really profound,” he says. He tries to carve out time to go to a cafe twice monthly with a notebook and engage in reflection.
“I’ll double-click on my business, double-click on my life,” he says. “I double-click on everything now.”
Double-click lingo has leapfrogged beyond corporate America. While CEOs including Walmart’s Douglas McMillon and Nvidia’s Jensen Huang have deployed the term, so, too, have congressional representatives, influencers and authors such as parenting guru Dr. Becky Kennedy.
The phrase is “innovative,” says Beth DelGiacco, a vice president of corporate communications at biotech company Argenx, who praises its efficiency.
“It’s only a few syllables. Everyone knows what you mean when you say it,” says DelGiacco, who regularly trots it out with peers.
Tech-inflected buzzwords are especially apt to gain traction—think “network,” “bandwidth” or “take offline”—because they can sound smart or cutting-edge, says Doug Guilbeault, an assistant professor at UC Berkeley’s Haas School of Business who has studied corporate jargon.
The inventor of the literal double-click, former Apple designer Bill Atkinson, isn’t convinced. Reached while boating on a recent weekday, Atkinson, now retired, says he’s never heard anyone use double-click as a metaphor and would steer clear of such usage himself, preferring more straightforward language.
He adds that since inventing the function in 1979, he’s come to regret it. He now thinks an extra “Shift” button on the mouse would have been more user-friendly.
“The double-click was a mistake,” says Atkinson, who left tech in 1995 to pursue nature photography. Personally, he double-clicks less frequently these days, given the rise of mouseless devices like tablets and smartphones.
“I double-tap, or I tap,” he says. “I long-press.”
Buzzwords tend to come and go, says HR consultant Nancy Settle-Murphy, noting that other tech-inspired terms, such as “RTFM”—or read the f—ing manual—are less commonly used today than they once were.
“There are fewer manuals now,” says Settle-Murphy, who recently installed a video doorbell at her home and notes it didn’t come with any pictures or diagrams.
Corporate jargon can be alienating. At a conference, Settle-Murphy was thrown when an audience member asked the speaker to double-click on a point they’d made.
“I thought, ‘these are slides, there’s no link, how can they double-click?’” she says, admitting she later searched online to find the new meaning.
Double-click has a long pedigree in the sales world. Matt Sunshine, head of the Center for Sales Strategy, which trains salespeople, says when he sold ad spots for a local radio station in Dallas in the 1990s, peers commonly used the term.
“Sales leaders would say, ‘Hey, you need to make sure you double-click on that’ with your prospects,” Sunshine says, meaning delve more deeply into any issues customers might raise, as in “Tell me more.”
While he doesn’t know exactly when it first took off, he says the phrase neatly encapsulates a core principle in effective sales strategy, in which salespeople seek to identify and address customers’ needs and concerns, instead of defaulting to one-size-fits-all pitches.
Double-clicking can help identify new business prospects, says Scott Bond, vice president of consumer services at Canadian real-estate company Rennie, which recently opened a U.S. location in Seattle.
Not long ago, Bond was on a Zoom call with his boss and some new business contacts based in southern California. The group hit it off, and afterward, Bond found himself mulling possibilities.
“I looked at my boss and said, hold on, I think we’re being presented with an opportunity here,” he says. “Why don’t we dive in and learn a little more?” His boss agreed, and the company is now planning to open its second American location in the Palm Springs area.
“We double-clicked,” he says.
Geoffrey Hinton hopes the prize will add credibility to his claims about the dangers of AI technology he pioneered
The newly minted Nobel laureate Geoffrey Hinton has a message about the artificial-intelligence systems he helped create: get more serious about safety or they could endanger humanity.
“I think we’re at a kind of bifurcation point in history where, in the next few years, we need to figure out if there’s a way to deal with that threat,” Hinton said in an interview Tuesday with a Nobel Prize official that mixed pride in his life’s work with warnings about the growing danger it poses.
The 76-year-old Hinton resigned from Google last year in part so he could talk more about the possibility that AI systems could escape human control and influence elections or power dangerous robots. Along with other experienced AI researchers, he has called on such companies as OpenAI, Meta Platforms and Alphabet-owned Google to devote more resources to the safety of the advanced systems that they are competing against each other to develop as quickly as possible.
Hinton’s Nobel win has provided a new platform for his doomsday warnings at the same time it celebrates his critical role in advancing the technologies fueling them. Hinton has argued that advanced AI systems are capable of understanding their outputs, a controversial view in research circles.
“Hopefully, it will make me more credible when I say these things really do understand what they’re saying,” he said of the prize.
Hinton’s views have pitted him against factions of the AI community that believe dwelling on doomsday scenarios needlessly slows technological progress or distracts from more immediate harms, such as discrimination against minority groups.
“I think that he’s a smart guy, but I think a lot of people have way overhyped the risk of these things, and that’s really convinced a lot of the general public that this is what we should be focusing on, not the more immediate harms of AI,” said Melanie Mitchell, a professor at the Santa Fe Institute, during a panel last year.
Hinton visited Google’s Silicon Valley headquarters Tuesday for an informal celebration, and some of the company’s top AI executives congratulated him on social media.
On Wednesday, other prominent Googlers specialising in AI were also awarded a Nobel Prize. Demis Hassabis, chief executive of Google DeepMind, and John M. Jumper, director at the AI lab, were part of a group of three scientists who won the chemistry prize for their work on predicting the shape of proteins.
Hinton is sharing the Nobel Prize in physics with John Hopfield of Princeton University for their work since the 1980s on neural networks that process information in ways inspired by the human brain. That work is the basis for many of the AI technologies in use today, from ChatGPT’s humanlike conversations to Google Photos’ ability to recognise who is in every picture you take.
“Their contributions to connect fundamental concepts in physics with concepts in biology, not just AI—these concepts are still with us today,” said Yoshua Bengio, an AI researcher at the University of Montreal.
In 2012, Hinton worked with two of his University of Toronto graduate students, Alex Krizhevsky and Ilya Sutskever, on a neural network called AlexNet programmed to recognise images in photos. Until that point, computer algorithms had often been unable to tell that a picture of a dog was really a dog and not a cat or a car.
AlexNet’s blowout victory at a 2012 contest for image-recognition technology was a pivotal moment in the development of the modern AI boom, as it proved the power of neural nets over other approaches.
That same year, Hinton started a company with Krizhevsky and Sutskever that turned out to be short-lived. Google acquired it in 2013 in an auction against competitors including Baidu and Microsoft, paying $44 million essentially to hire the three men, according to the book “Genius Makers.” Hinton began splitting time between the University of Toronto and Google, where he continued research on neural networks.
Hinton is widely revered as a mentor for the current generation of top AI researchers including Sutskever, who co-founded OpenAI before leaving this spring to start a company called Safe Superintelligence.
Hinton received the 2018 Turing Award, a computer-science prize, for his work on neural networks alongside Bengio and a fellow AI researcher, Yann LeCun. The three are often referred to as the modern “godfathers of AI.”
By 2023, Hinton had become alarmed about the consequences of building more powerful artificial intelligence. He began talking about the possibility that AI systems could escape the control of their creators and cause catastrophic harm to humanity. In doing so, he aligned himself with a vocal movement of people concerned about the existential risks of the technology.
“We’re in a situation that most people can’t even conceive of, which is that these digital intelligences are going to be a lot smarter than us, and if they want to get stuff done, they’re going to want to take control,” Hinton said in an interview last year.
Hinton announced he was leaving Google in spring 2023, saying he wanted to be able to freely discuss the dangers of AI without worrying about consequences for the company. Google had acted “very responsibly,” he said in an X post.
In the months since, Hinton has spent much of his time speaking to policymakers and tech executives, including Elon Musk, about AI risks.
Hinton cosigned a paper last year saying companies doing AI work should allocate at least one-third of their research and development resources to ensuring the safety and ethical use of their systems.
“One thing governments can do is force the big companies to spend a lot more of their resources on safety research, so that for example companies like OpenAI can’t just put safety research on the back burner,” Hinton said in the Nobel interview.
An OpenAI spokeswoman said the company is proud of its safety work.
With Bengio and other researchers, Hinton supported an artificial-intelligence safety bill passed by the California Legislature this summer that would have required developers of large AI systems to take a number of steps to ensure they can’t cause catastrophic damage. Gov. Gavin Newsom recently vetoed the bill, which was opposed by most big tech companies including Google.
Hinton’s increased activism has put him in opposition to other respected researchers who believe his warnings are fantastical because AI is far from having the capability to cause serious harm.
“Their complete lack of understanding of the physical world and lack of planning abilities put them way below cat-level intelligence, never mind human-level,” LeCun wrote in a response to Hinton on X last year.