Efforts to Rein In AI Tap Lesson From Social Media: Don’t Wait Until It’s Too Late
Activists and officials race to shape rules and public understanding of new artificial intelligence tools
Social media was more than a decade old before efforts to curb its ill effects began in earnest. With artificial intelligence, lawmakers, activists and executives aren’t waiting that long.
Over the past several months, award-winning scientists, White House officials and tech CEOs have called for guardrails around generative AI tools such as ChatGPT—the chatbot launched last year by Microsoft-backed startup OpenAI. Among those at the table are many veterans of the continuing battle to make social media safer.
Those advocates view the AI debate as a fresh chance to influence how companies make and market their products and to shape public expectations of the technology. This time, they aim to move faster, applying lessons learned from the fight over social media.
“We missed the window on social media,” said Jim Steyer, chief executive of Common Sense Media, a child internet-safety organisation that has for years criticised social-media platforms over issues including privacy and harmful content. “It was late—very late—and the ground rules had already been set and industry just did whatever it wanted to do.”
Activists and executives alike are rolling out projects and proposals intended to shape public understanding and regulation, addressing issues including AI’s potential for manipulation, misinformation and bias.
Common Sense is developing an independent AI ratings and reviews system that will assess AI products such as ChatGPT on their handling of private data, suitability for children and other factors. The nonprofit plans to launch the system this fall and spend between $5 million and $10 million a year on top of its $25 million budget to fund the project.
Other internet advocacy groups including the Mozilla Foundation are also building their own open-source AI tools and investing in startups that say they are building responsible AI systems. Some firms initially focused on social media are now trying to sell services to AI companies to help their chatbots avoid churning out misinformation and other harmful content.
Tech companies are racing to influence regulation, discussing it with global governments that are both wary of AI and eager to capitalise on its opportunities. In early May, President Biden met with the chief executives of companies including OpenAI, Microsoft and Google at the White House. OpenAI CEO Sam Altman has spent weeks meeting with lawmakers and other leaders globally to discuss AI’s risks and his company’s idea of safe regulation.
Altman and Microsoft President Brad Smith have both argued for a new regulatory agency that would license large AI systems. Tesla CEO Elon Musk, who on Wednesday announced the official launch of his new AI startup, said in May that the government should convene an independent oversight committee, potentially including industry executives, to create rules that ensure AI is developed safely.
The Federal Trade Commission also is taking a hard look at AI. It is investigating whether OpenAI has “engaged in unfair or deceptive practices” stemming from false information published by ChatGPT, according to a civil subpoena made public this past week. Altman said OpenAI is confident that it follows the law and “of course we will work with the FTC.”
Looming large over all this activity is the growing feeling among many activists and lawmakers that years of efforts to regulate or otherwise change social-media companies including Facebook parent Meta Platforms, Twitter and TikTok were unsatisfactory. Facebook was founded in 2004 and Twitter in 2006, but widespread discussion about regulation didn’t take off until after revelations of Russian interference and other problems in the 2016 U.S. election.
“Congress failed to meet the moment on social media,” Democratic Sen. Richard Blumenthal said during a congressional hearing on AI in May. “Now we have the obligation to do it on AI before the threats and the risks become real.”
Though social-media executives in recent years called for more regulation, no new U.S. federal laws have been enacted that require companies to protect users’ privacy and data or that update the nearly three-decade-old rules for how platforms police content. That is partly because lawmakers disagree over whether companies should do more to moderate what is said on their platforms or whether they have already overstepped into stifling free speech.
Some of the activists who are veterans of those battles say two major lessons from this era are that the companies can’t be trusted to self-regulate and that the federal government is too gridlocked to pass meaningful legislation. “There’s a massive void,” Steyer of Common Sense Media said.
Yet he and others say they are encouraged by the willingness of AI companies to discuss major issues.
“We’re seeing some of the people from trust and safety teams from social media are now at AI companies,” said L. Gordon Crovitz, co-founder of NewsGuard, a company that tracks and rates news sites. Crovitz, former publisher of The Wall Street Journal, says these people seem much more empowered in their current roles. “The body language is ‘we’ve been freed.’”
Large language models such as GPT-4 are trained on vast amounts of text scraped from the internet, but that data contains large chunks of hate speech, misinformation and other harmful content. After the initial training, the models are refined to weed out some of that bad content, a process called fine-tuning.
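Pipelines differ from lab to lab, but the data-cleaning idea behind that refinement can be sketched in a few lines of Python. Everything below is an invented stand-in for illustration: the blocklist, the sample texts and the keyword-matching approach. Production systems rely on trained classifiers and human feedback rather than word lists.

```python
# Toy sketch of filtering scraped text before fine-tuning.
# Blocklist and examples are hypothetical, not any lab's real data.

BLOCKLIST = {"fake_cure", "slur_example"}  # invented flagged terms

def is_clean(text: str) -> bool:
    """Return True if no flagged term appears in the text."""
    tokens = set(text.lower().split())
    return tokens.isdisjoint(BLOCKLIST)

scraped = [
    "vaccines undergo clinical trials",
    "this fake_cure heals everything",
    "weather will be sunny tomorrow",
]

# Keep only the examples that pass the filter.
fine_tuning_corpus = [t for t in scraped if is_clean(t)]
print(fine_tuning_corpus)
```

A licensed catalogue of known false narratives, as NewsGuard proposes, would slot into a pipeline like this as a far richer replacement for the toy blocklist.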
NewsGuard has been talking to AI companies about licensing its data—which Crovitz calls a “catalog of all the important false narratives that are out there”—for fine-tuning and to bolster AI models’ guardrails against producing just those types of misinformation and false narratives.
Ravi Iyer, a former product manager for Meta, is now at the University of Southern California’s Marshall School of Business and developing a poll that tracks how people experience AI systems. He hopes the poll will influence how AI companies design and deploy their products.
“We need to know that’s a choice platforms can make and reward them for not making the wrong choices,” Iyer said.
The Mozilla Foundation, the nonprofit behind the Firefox internet browser, said it is building open-source models as alternatives to large private AI models. “We need to build alternatives and not just advocate for them,” Mark Surman, Mozilla’s president, said.
Steyer described the AI ratings system being built at Common Sense as the most ambitious in the nonprofit’s history. Tracy Pizzo Frey, a consultant who previously worked for Google and is helping craft the system, said there is no set way to evaluate the safety of AI tools.
So far, Common Sense is looking at seven factors, including how transparent companies are about what their systems can do and where they still have shortcomings. The nonprofit may factor in how much information companies provide about their training data, which companies including OpenAI view as competitive secrets.
Frey said Common Sense won’t ask for proprietary data but needs information that helps parents and educators make informed decisions about the use of AI. “There are no rules around what transparency looks like,” Frey said.
Quantum computing is moving from theory to real-world investment. Professor David Reilly says it could reshape finance, security and global technology infrastructure.
For decades, the world’s computing power has quietly expanded at an astonishing pace.
From the first transistor developed at Bell Labs in 1947 to modern processors containing billions and even trillions of transistors, each generation of technology has been faster, smaller and more powerful than the last.
But according to quantum physicist and technology entrepreneur David Reilly, that era of effortless progress is beginning to slow.
Reilly, CEO of Sydney-based Emergence Quantum and Professor of Physics at the University of Sydney, says the computing infrastructure underpinning modern economies is approaching fundamental physical limits.
And that could have enormous implications for finance, artificial intelligence and global investment.
Speaking at an industry event organised by Kanebridge International, Reilly said many critical parts of modern society depend on computing and the infrastructure used to process information.
For years, the technology industry relied on a steady improvement known as Moore’s Law, where the number of transistors on a chip doubled roughly every two years.
More transistors meant more computing power, allowing faster software, smarter devices and ever-larger data systems.
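As a back-of-envelope illustration of that doubling, the sketch below projects transistor counts forward from the Intel 4004's widely cited 2,300 transistors in 1971. The clean two-year doubling is an idealisation of Moore's Law, not real chip data.

```python
# Back-of-envelope Moore's Law: transistor counts doubling
# roughly every two years from a 1971 starting point.

def transistors(year: int, base_year: int = 1971, base_count: int = 2300) -> int:
    """Idealised transistor count, doubling every two years."""
    doublings = (year - base_year) / 2
    return int(base_count * 2 ** doublings)

for y in (1971, 1991, 2011, 2021):
    print(y, f"{transistors(y):,}")
```

Fifty years of doubling takes the idealised count from thousands to tens of billions, which is roughly where flagship processors actually landed, and it is the flattening of exactly this curve that Reilly points to.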
Today, however, those gains are slowing.
“It feels to me very innate that I’m going to just find that next year there’s going to be another breakthrough,” Reilly said.
“But if you look at the data…there’s a slowing down, a roll off in performance that started some 10, 20 years ago.”
Rather than making chips dramatically faster, manufacturers are now largely increasing computing capacity by packing more transistors onto each processor.
The approach works, but it comes with growing complexity, higher costs and increasing energy demands.
That challenge is already visible in the massive data centres being built to support artificial intelligence.
In the race to dominate AI, companies are constructing vast computing facilities that consume huge amounts of electricity and water. Reilly described this expansion as a “brute force” approach driven by the global competition to develop advanced AI systems.
Yet the demand for computing power continues to accelerate.
Artificial intelligence, advanced robotics, healthcare research, pharmaceuticals and cybersecurity all require far more processing capacity than today’s systems can easily deliver.
The question now facing the technology sector is whether traditional computing can keep up.
That is where quantum computing enters the conversation.
Unlike conventional computers, which process information using binary switches that represent ones and zeros, quantum computers exploit the unusual behaviour of particles at the atomic scale.
Reilly describes them as a fundamentally different type of machine.
“So a quantum computer is a wave computer,” he said.
Instead of processing information through simple on-off switches, quantum systems can use wave-like properties of particles to process many possible outcomes simultaneously.
Those waves can interact in complex ways, reinforcing correct solutions while cancelling out incorrect ones. In theory, this allows quantum systems to tackle certain types of problems dramatically faster than classical computers.
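That reinforce-and-cancel behaviour can be illustrated with a one-qubit toy model in plain Python. The two-number tuples standing in for quantum states and the hand-rolled Hadamard transform are simplifications for illustration, not how real quantum hardware or quantum software libraries work.

```python
# One-qubit interference demo using plain complex arithmetic.
import math

H = 1 / math.sqrt(2)  # Hadamard amplitude factor

def hadamard(state):
    """Apply a Hadamard transform to a (amp_of_0, amp_of_1) pair."""
    a0, a1 = state
    return (H * (a0 + a1), H * (a0 - a1))

state = (1.0, 0.0)       # start in |0>
state = hadamard(state)  # equal superposition of |0> and |1>
state = hadamard(state)  # second pass: the two paths interfere

# Probabilities are squared amplitude magnitudes.
probs = [abs(a) ** 2 for a in state]
print(probs)
```

Applying the transform once spreads the amplitude evenly across both outcomes; applying it twice makes the paths interfere, so the |1> contributions cancel and essentially all the probability returns to the starting state, which is the cancellation effect Reilly describes.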
The concept may sound abstract, but its potential applications are significant.
Quantum computers are expected to transform areas such as materials science, chemical modelling and pharmaceutical development.
They could also help solve complex optimisation problems in logistics, finance and risk management.
For financial institutions in particular, the technology could offer new tools for detecting fraud, analysing market behaviour and optimising portfolios.
But the shift will not happen overnight.
“One message to take away is that quantum is not going to suddenly solve all of your problems,” Reilly said.
Instead, he said quantum systems will likely complement existing computing technologies as part of a broader and more diverse computing ecosystem.
One key change already emerging is how computing systems are physically designed.
Many next-generation technologies, including quantum processors, operate far more efficiently at extremely low temperatures. As a result, future data centres may rely heavily on cryogenic cooling systems to manage heat and energy consumption.
Reilly believes that the shift will gradually reshape the computing industry.
“Over the next five years, you’re going to see data centres go cold,” he said.
“And as that happens, they almost drag with them new compute paradigms.”
Emergence Quantum, the company he co-founded, is focused on developing technologies to support that transition, including cryogenic electronics and integrated hardware platforms designed for quantum computing and energy-efficient systems.
For investors and businesses, the technology remains in its early stages. But the scale of global interest is growing rapidly.
Governments, research institutions and technology companies are investing heavily in quantum research, betting it could become a foundational technology for the next generation of computing.
For Reilly, the moment feels similar to earlier technological turning points.
In the 19th century, new discoveries in thermodynamics helped drive the development of steam engines and the Industrial Revolution. In the 20th century, advances in electromagnetism led to radio, television and eventually the internet.
Quantum physics, he suggests, could represent the next chapter in that story.
“Today we have, as a society, in our hands new physics that we’re just beginning to figure out what to do with,” Reilly said.
“But I think it’s an exciting time to be alive and watch what happens over the coming decades.”