The RBA Holds Interest Rates Firm
The central bank details Delta’s predicted impact on the economy.
The latest monetary policy statement by Reserve Bank of Australia Governor Dr Philip Lowe reveals that interest rates remain on hold, as the central bank maintains its target of 10 basis points for the yield on the April 2024 Australian Government bond.
In the statement, Dr Lowe references the considerable momentum of the Australian economy prior to the Delta outbreak.
“GDP increased by 0.7 per cent in the June quarter and by nearly 10 per cent over the year. Business investment was picking up and the labour market had strengthened. The unemployment rate had fallen below 5 per cent and job vacancies were at a high level,” reads the statement by Dr Lowe.
However, the statement notes that the economic recovery has been interrupted and that GDP is expected to decline in the September quarter, with unemployment set to move higher over the coming months.
On housing, Lowe states: “Housing prices are continuing to rise, although turnover in some markets has declined following the virus outbreak. Housing credit growth has picked up due to stronger demand for credit by both owner-occupiers and investors.”
However, the RBA maintains that it will continue to monitor borrowing, stating, “Given the environment of rising housing prices and low interest rates, the Bank is monitoring trends in housing borrowing carefully and it is important that lending standards are maintained.”
Despite the impact of the Delta strain, the RBA is confident that the setback to the economy will be temporary, stating that “the Delta outbreak is expected to delay, but not derail, the recovery.”
Geoffrey Hinton hopes the prize will add credibility to his claims about the dangers of the AI technology he pioneered
The newly minted Nobel laureate Geoffrey Hinton has a message about the artificial-intelligence systems he helped create: get more serious about safety or they could endanger humanity.
“I think we’re at a kind of bifurcation point in history where, in the next few years, we need to figure out if there’s a way to deal with that threat,” Hinton said in an interview Tuesday with a Nobel Prize official that mixed pride in his life’s work with warnings about the growing danger it poses.
The 76-year-old Hinton resigned from Google last year in part so he could talk more about the possibility that AI systems could escape human control and influence elections or power dangerous robots. Along with other experienced AI researchers, he has called on such companies as OpenAI, Meta Platforms and Alphabet-owned Google to devote more resources to the safety of the advanced systems that they are competing against each other to develop as quickly as possible.
Hinton’s Nobel win has provided a new platform for his doomsday warnings at the same time it celebrates his critical role in advancing the technologies fueling them. Hinton has argued that advanced AI systems are capable of understanding their outputs, a controversial view in research circles.
“Hopefully, it will make me more credible when I say these things really do understand what they’re saying,” he said of the prize.
Hinton’s views have pitted him against factions of the AI community that believe dwelling on doomsday scenarios needlessly slows technological progress or distracts from more immediate harms, such as discrimination against minority groups.
“I think that he’s a smart guy, but I think a lot of people have way overhyped the risk of these things, and that’s really convinced a lot of the general public that this is what we should be focusing on, not the more immediate harms of AI,” said Melanie Mitchell, a professor at the Santa Fe Institute, during a panel last year.
Hinton visited Google’s Silicon Valley headquarters Tuesday for an informal celebration, and some of the company’s top AI executives congratulated him on social media.
On Wednesday, other prominent Googlers specialising in AI were also awarded a Nobel Prize. Demis Hassabis, chief executive of Google DeepMind, and John M. Jumper, director at the AI lab, were part of a group of three scientists who won the chemistry prize for their work on predicting the shape of proteins.
Hinton is sharing the Nobel Prize in physics with John Hopfield of Princeton University for their work since the 1980s on neural networks that process information in ways inspired by the human brain. That work is the basis for many of the AI technologies in use today, from ChatGPT’s humanlike conversations to Google Photos’ ability to recognise who is in every picture you take.
“Their contributions to connect fundamental concepts in physics with concepts in biology, not just AI—these concepts are still with us today,” said Yoshua Bengio, an AI researcher at the University of Montreal.
In 2012, Hinton worked with two of his University of Toronto graduate students, Alex Krizhevsky and Ilya Sutskever, on a neural network called AlexNet programmed to recognise images in photos. Until that point, computer algorithms had often been unable to tell that a picture of a dog was really a dog and not a cat or a car.
AlexNet’s blowout victory at a 2012 contest for image-recognition technology was a pivotal moment in the development of the modern AI boom, as it proved the power of neural nets over other approaches.
That same year, Hinton started a company with Krizhevsky and Sutskever that turned out to be short-lived. Google acquired it in 2013 in an auction against competitors including Baidu and Microsoft, paying $44 million essentially to hire the three men, according to the book “Genius Makers.” Hinton began splitting time between the University of Toronto and Google, where he continued research on neural networks.
Hinton is widely revered as a mentor for the current generation of top AI researchers including Sutskever, who co-founded OpenAI before leaving this spring to start a company called Safe Superintelligence.
Hinton received the 2018 Turing Award, a computer-science prize, for his work on neural networks alongside Bengio and a fellow AI researcher, Yann LeCun. The three are often referred to as the modern “godfathers of AI.”
By 2023, Hinton had become alarmed about the consequences of building more powerful artificial intelligence. He began talking about the possibility that AI systems could escape the control of their creators and cause catastrophic harm to humanity. In doing so, he aligned himself with a vocal movement of people concerned about the existential risks of the technology.
“We’re in a situation that most people can’t even conceive of, which is that these digital intelligences are going to be a lot smarter than us, and if they want to get stuff done, they’re going to want to take control,” Hinton said in an interview last year.
Hinton announced he was leaving Google in spring 2023, saying he wanted to be able to freely discuss the dangers of AI without worrying about consequences for the company. Google had acted “very responsibly,” he said in an X post.
In the subsequent months, Hinton has spent much of his time speaking to policymakers and tech executives, including Elon Musk, about AI risks.
Hinton cosigned a paper last year saying companies doing AI work should allocate at least one-third of their research and development resources to ensuring the safety and ethical use of their systems.
“One thing governments can do is force the big companies to spend a lot more of their resources on safety research, so that for example companies like OpenAI can’t just put safety research on the back burner,” Hinton said in the Nobel interview.
An OpenAI spokeswoman said the company is proud of its safety work.
With Bengio and other researchers, Hinton supported an artificial-intelligence safety bill passed by the California Legislature this summer that would have required developers of large AI systems to take a number of steps to ensure they can’t cause catastrophic damage. Gov. Gavin Newsom recently vetoed the bill, which was opposed by most big tech companies including Google.
Hinton’s increased activism has put him in opposition to other respected researchers who believe his warnings are fantastical because AI is far from having the capability to cause serious harm.
“Their complete lack of understanding of the physical world and lack of planning abilities put them way below cat-level intelligence, never mind human-level,” LeCun wrote in a response to Hinton on X last year.