First Bitcoin Futures ETF Rises In Trading Debut
ProShares Bitcoin Strategy ETF advances nearly 5% following its closely watched launch.
The first bitcoin-focused exchange-traded fund rose in its trading debut Tuesday after getting a warm reception from investors.
The ProShares Bitcoin Strategy ETF climbed for most of the day, gaining nearly 5% to settle at US$41.94. About US$981 million of shares changed hands over the session, making it the second-most heavily traded ETF debut on record, according to Elisabeth Kashner, director of ETF research at FactSet.
The launch is being closely watched on Wall Street, where finding a way to sell securities linked to bitcoin has been a priority for many firms. Bethesda, Md.-based ProShares rang the bell at the New York Stock Exchange on Tuesday to celebrate the launch of its ETF, which goes by the ticker BITO and holds bitcoin futures contracts rather than the cryptocurrency.
“There are a multitude of investors who have brokerage accounts and are comfortable buying stocks and ETFs,” said ProShares Chief Executive Michael Sapir in an interview. “We think this will appeal to them.”
Among the fund’s first-day investors was Thomas Johnson, who is 33 years old and works in pharmaceutical sales in Orlando, Fla. Soon after the fund started trading, Mr. Johnson said he used about 15% of the assets in his retirement account to buy shares of the fund.
“I see cryptocurrencies as a whole as something that will outperform the stock market,” said Mr. Johnson.
He added that it was his first ever purchase of an ETF, although he started buying bitcoin a year earlier.
Other asset managers, including Valkyrie Investments and VanEck, are expected to launch similar funds. But one of the biggest global asset-management firms, Invesco, on Monday put its bitcoin futures ETF on hold.
“We have determined not to pursue the launch of a Bitcoin futures ETF in the immediate near term,” an Invesco spokeswoman said in a statement. The firm said it is committed to working with its partner, Galaxy Digital Holdings, on an ETF that holds crypto rather than futures.
Invesco didn’t elaborate on the decision.
The firm amended its filing late Monday, pushing the fund’s effective date toward the end of the month rather than withdrawing it altogether, signalling the ETF might still launch later on.
Thomas Lee, a managing partner at research firm Fundstrat Advisors, said the ProShares ETF will enable more individuals to invest in bitcoin. He said assets in the fund could rise to as much as $50 billion from the $20 million the fund started with on Tuesday.
“This will drive higher asset prices via network effects,” Mr. Lee said. He said bitcoin could rise to $168,000 from a recent $64,000.
Bitcoin has climbed 48% since September, reflecting in part purchases driven by the prospective launch of the ProShares ETF and rivals.
The ETF came online following an eight-year effort by asset managers to create funds that hold actual bitcoins. The Securities and Exchange Commission, which hasn’t supported that approach because of concerns that bitcoin trading isn’t transparent enough to protect investors from fraud and manipulation, instead steered asset managers toward the creation of a bitcoin futures product.
Unlike digital currencies, futures trade on regulated venues such as the Chicago Mercantile Exchange.
Futures-based ETFs are sometimes hampered by discrepancies between the futures market and the underlying assets they track.
Asset managers say that is a trade-off some investors are likely willing to make to get exposure to crypto through the more-regulated futures market.
“That’s what I’m counting on. Other investors will see value in the ETF, or at least more of a safety net and be more willing to invest” in crypto, added Mr. Johnson.
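A purely hypothetical illustration of that trade-off: when futures trade at a premium to the spot price, a fund that keeps rolling expiring contracts into new ones gives up a little of that premium on each roll, so it can lag the cryptocurrency itself even over a stretch when the spot price goes nowhere. The Python sketch below uses made-up numbers solely to show the mechanism; it is not a model of BITO or of actual bitcoin futures pricing.

```python
# Illustrative sketch only: why a fund that rolls futures can lag the spot market.
# All numbers are hypothetical and chosen purely to show the mechanism,
# not taken from the article or from real bitcoin futures data.

spot = 100.0          # assume the spot price stays flat all year
roll_premium = 0.01   # assume each new front-month contract costs 1% more than spot
months = 12

futures_nav = 100.0   # value of a fund that buys and rolls futures monthly
for _ in range(months):
    # The contract is bought at spot * (1 + roll_premium) and converges back to
    # spot by expiry, so each monthly roll gives up roughly the premium.
    futures_nav *= spot / (spot * (1 + roll_premium))

print(f"Spot return:           {0.0:+.1%}")
print(f"Rolled-futures return: {futures_nav / 100.0 - 1:+.1%}")  # about -11% from roll costs alone
```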
Even with the promise of regulatory oversight, SEC Chairman Gary Gensler warned investors Tuesday that bitcoin futures remain just as risky as the cryptocurrency itself.
“It’s still a highly speculative asset class and listeners should understand that underneath this, it still has that same aspect of volatility and speculation,” Mr. Gensler said in a CNBC interview.
Reprinted by permission of The Wall Street Journal, Copyright 2021 Dow Jones & Company, Inc. All Rights Reserved Worldwide. Original date of publication: October 19, 2021.
Geoffrey Hinton hopes the prize will add credibility to his claims about the dangers of AI technology he pioneered
The newly minted Nobel laureate Geoffrey Hinton has a message about the artificial-intelligence systems he helped create: get more serious about safety or they could endanger humanity.
“I think we’re at a kind of bifurcation point in history where, in the next few years, we need to figure out if there’s a way to deal with that threat,” Hinton said in an interview Tuesday with a Nobel Prize official that mixed pride in his life’s work with warnings about the growing danger it poses.
The 76-year-old Hinton resigned from Google last year in part so he could talk more about the possibility that AI systems could escape human control and influence elections or power dangerous robots. Along with other experienced AI researchers, he has called on such companies as OpenAI, Meta Platforms and Alphabet-owned Google to devote more resources to the safety of the advanced systems that they are competing against each other to develop as quickly as possible.
Hinton’s Nobel win has provided a new platform for his doomsday warnings at the same time it celebrates his critical role in advancing the technologies fueling them. Hinton has argued that advanced AI systems are capable of understanding their outputs, a controversial view in research circles.
“Hopefully, it will make me more credible when I say these things really do understand what they’re saying,” he said of the prize.
Hinton’s views have pitted him against factions of the AI community that believe dwelling on doomsday scenarios needlessly slows technological progress or distracts from more immediate harms, such as discrimination against minority groups.
“I think that he’s a smart guy, but I think a lot of people have way overhyped the risk of these things, and that’s really convinced a lot of the general public that this is what we should be focusing on, not the more immediate harms of AI,” said Melanie Mitchell, a professor at the Santa Fe Institute, during a panel last year.
Hinton visited Google’s Silicon Valley headquarters Tuesday for an informal celebration, and some of the company’s top AI executives congratulated him on social media.
On Wednesday, other prominent Googlers specialising in AI were also awarded a Nobel Prize. Demis Hassabis, chief executive of Google DeepMind, and John M. Jumper, director at the AI lab, were part of a group of three scientists who won the chemistry prize for their work on predicting the shape of proteins.
Hinton is sharing the Nobel Prize in physics with John Hopfield of Princeton University for their work since the 1980s on neural networks that process information in ways inspired by the human brain. That work is the basis for many of the AI technologies in use today, from ChatGPT’s humanlike conversations to Google Photos’ ability to recognise who is in every picture you take.
“Their contributions to connect fundamental concepts in physics with concepts in biology, not just AI—these concepts are still with us today,” said Yoshua Bengio, an AI researcher at the University of Montreal.
In 2012, Hinton worked with two of his University of Toronto graduate students, Alex Krizhevsky and Ilya Sutskever, on a neural network called AlexNet programmed to recognise images in photos. Until that point, computer algorithms had often been unable to tell that a picture of a dog was really a dog and not a cat or a car.
AlexNet’s blowout victory at a 2012 contest for image-recognition technology was a pivotal moment in the development of the modern AI boom, as it proved the power of neural nets over other approaches.
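For readers curious what AlexNet-style image recognition looks like in practice today, the sketch below loads the pretrained AlexNet distributed with PyTorch’s torchvision library and classifies a single photo. This is an illustrative example rather than the original 2012 code, and the file name "dog.jpg" is a placeholder for any image on disk.

```python
import torch
from PIL import Image
from torchvision import models

# Load the AlexNet weights that ship with torchvision (trained on ImageNet).
weights = models.AlexNet_Weights.IMAGENET1K_V1
model = models.alexnet(weights=weights)
model.eval()

# "dog.jpg" is a placeholder file name; substitute any photo you have locally.
img = Image.open("dog.jpg").convert("RGB")
batch = weights.transforms()(img).unsqueeze(0)  # resize, crop, normalise; shape (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)

# Map the highest-scoring class index back to a human-readable label.
label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(label)  # e.g. a dog breed for a photo of a dog
```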
That same year, Hinton started a company with Krizhevsky and Sutskever that turned out to be short-lived. Google acquired it in 2013 in an auction against competitors including Baidu and Microsoft, paying $44 million essentially to hire the three men, according to the book “Genius Makers.” Hinton began splitting time between the University of Toronto and Google, where he continued research on neural networks.
Hinton is widely revered as a mentor for the current generation of top AI researchers including Sutskever, who co-founded OpenAI before leaving this spring to start a company called Safe Superintelligence.
Hinton received the 2018 Turing Award, a computer-science prize, for his work on neural networks alongside Bengio and a fellow AI researcher, Yann LeCun. The three are often referred to as the modern “godfathers of AI.”
By 2023, Hinton had become alarmed about the consequences of building more powerful artificial intelligence. He began talking about the possibility that AI systems could escape the control of their creators and cause catastrophic harm to humanity. In doing so, he aligned himself with a vocal movement of people concerned about the existential risks of the technology.
“We’re in a situation that most people can’t even conceive of, which is that these digital intelligences are going to be a lot smarter than us, and if they want to get stuff done, they’re going to want to take control,” Hinton said in an interview last year.
Hinton announced he was leaving Google in spring 2023, saying he wanted to be able to freely discuss the dangers of AI without worrying about consequences for the company. Google had acted “very responsibly,” he said in an X post.
In the subsequent months, Hinton has spent much of his time speaking to policymakers and tech executives, including Elon Musk, about AI risks.
Hinton cosigned a paper last year saying companies doing AI work should allocate at least one-third of their research and development resources to ensuring the safety and ethical use of their systems.
“One thing governments can do is force the big companies to spend a lot more of their resources on safety research, so that for example companies like OpenAI can’t just put safety research on the back burner,” Hinton said in the Nobel interview.
An OpenAI spokeswoman said the company is proud of its safety work.
With Bengio and other researchers, Hinton supported an artificial-intelligence safety bill passed by the California Legislature this summer that would have required developers of large AI systems to take a number of steps to ensure they can’t cause catastrophic damage. Gov. Gavin Newsom recently vetoed the bill, which was opposed by most big tech companies including Google.
Hinton’s increased activism has put him in opposition to other respected researchers who believe his warnings are fantastical because AI is far from having the capability to cause serious harm.
“Their complete lack of understanding of the physical world and lack of planning abilities put them way below cat-level intelligence, never mind human-level,” LeCun wrote in a response to Hinton on X last year.