Sam Bankman-Fried’s Lawyers Seek to Regain Ground in FTX Trial
Founder looks to rebound from cross-examination, with closing arguments expected to begin Wednesday
Sam Bankman-Fried’s lawyers rested their case Tuesday after seeking to rehabilitate the FTX founder’s credibility from the prosecutors’ two-day grilling.
Bankman-Fried, dressed in a grey suit, floundered through the end of Assistant U.S. Attorney Danielle Sassoon’s cross-examination.
For a second day, Sassoon walked Bankman-Fried through balance sheets, communications and tweets, again highlighting inconsistencies—or what she portrayed as outright lies—between the defendant’s public statements and his private knowledge.
Bankman-Fried repeatedly told jurors he couldn’t recall many of his past statements, or the exact timeline of events.
Defence attorney Mark Cohen sought to elicit testimony explaining his client’s evasiveness, asking about the reasons for his foggy memory, his use of a private jet and his apparent contempt for regulation.
“You used the phrase ‘f— regulators,’ ” Cohen said, referring to a series of messages between Bankman-Fried and a Vox reporter. “Was that the full extent of the chain?”
It wasn’t, said Bankman-Fried, adding that he felt that his efforts to work with regulators might have only led to more bad regulation. “I was somewhat frustrated,” he said.
Cohen asked about the huge amount of evidence in the case—suggesting his client couldn’t possibly remember every document—and his many media interviews.
Bankman-Fried told the jury he talked to about 50 reporters during the time between FTX’s collapse and his arrest, typically preparing between 30 seconds and an hour for each interview. When he testified before Congress, others helped him prepare his testimony, he said.
Bankman-Fried’s testimony, which formed the bulk of his defence team’s presentation, is likely crucial to jurors’ determination of whether to find him guilty of fraud and other charges. Closing arguments are scheduled for Wednesday, meaning the jury is likely to get the case on Thursday.
About half of the jurors watched Bankman-Fried as he spoke. Some scribbled notes and others gazed at the floor. One man closed his eyes. Damian Williams, the Manhattan U.S. attorney who has given priority to prosecuting cryptocurrency cases, sat in the front row of the courtroom gallery.
Bankman-Fried again answered some of the prosecutor’s questions by quibbling with their premise. When asked about an $8 billion hole in the balance sheet of Alameda Research, FTX’s sister hedge fund, he said that “hole” wasn’t the word he would use. He said he couldn’t speak with exact confidence about whether some FTX customers, outside of its sister hedge fund, had special privileges.
Sassoon asked if it was Bankman-Fried’s practice to maximise making money even with the risk of going bust. “It depends on the context,” he replied. He later added, “With respect to some of them, yes.”
Sassoon concluded her cross-examination by playing a recording of a Nov. 9, 2022, all-hands meeting in which Caroline Ellison, the former chief executive of Alameda Research and Bankman-Fried’s former girlfriend, spoke with Alameda staffers. Ellison, her voice halting, said she had talked about Alameda’s use of customer funds with Bankman-Fried and two of his top deputies, Nishad Singh and Gary Wang.
“Ms. Ellison identified you, Gary and Nishad as her co-conspirators, correct?” Sassoon asked.
Sassoon showed jurors a document, from Dec. 25, 2022, in which Bankman-Fried appeared to be analysing his own potential legal jeopardy and assessing how the government viewed the alleged conspiracy. While it was public that Ellison and Wang were cooperating with prosecutors, Bankman-Fried wasn’t sure if Singh, a former FTX executive, would be charged.
“They don’t seem to be keeping a seat warm for him as a defendant,” the document said.
“You wrote that, Mr. Bankman-Fried?” asked Sassoon. “I think so,” he said.
Singh, who later pleaded guilty, testified for the government earlier in the trial.
Later, Bankman-Fried’s lawyer referenced a photograph of Bankman-Fried on a private jet, reclining with his eyes closed. The prosecution had shown the jury the photo as an example of excess spending. Cohen asked Bankman-Fried if he remembered the photo.
“A very flattering one,” Bankman-Fried said sarcastically, before agreeing that using a private jet was a valid business expense.
“It was very logistically difficult to travel between the Bahamas and a few places, chiefly Washington, D.C.,” the FTX founder told the jury.
After the defence attorney wrapped up, Sassoon told the judge she had no more questions. Bankman-Fried took a long swig from his water bottle as he stepped down from the witness stand for the final time.
Geoffrey Hinton hopes the prize will add credibility to his claims about the dangers of AI technology he pioneered
The newly minted Nobel laureate Geoffrey Hinton has a message about the artificial-intelligence systems he helped create: get more serious about safety or they could endanger humanity.
“I think we’re at a kind of bifurcation point in history where, in the next few years, we need to figure out if there’s a way to deal with that threat,” Hinton said in an interview Tuesday with a Nobel Prize official that mixed pride in his life’s work with warnings about the growing danger it poses.
The 76-year-old Hinton resigned from Google last year in part so he could talk more about the possibility that AI systems could escape human control and influence elections or power dangerous robots. Along with other experienced AI researchers, he has called on such companies as OpenAI, Meta Platforms and Alphabet-owned Google to devote more resources to the safety of the advanced systems that they are competing against each other to develop as quickly as possible.
Hinton’s Nobel win has provided a new platform for his doomsday warnings at the same time it celebrates his critical role in advancing the technologies fueling them. Hinton has argued that advanced AI systems are capable of understanding their outputs, a controversial view in research circles.
“Hopefully, it will make me more credible when I say these things really do understand what they’re saying,” he said of the prize.
Hinton’s views have pitted him against factions of the AI community that believe dwelling on doomsday scenarios needlessly slows technological progress or distracts from more immediate harms, such as discrimination against minority groups.
“I think that he’s a smart guy, but I think a lot of people have way overhyped the risk of these things, and that’s really convinced a lot of the general public that this is what we should be focusing on, not the more immediate harms of AI,” said Melanie Mitchell, a professor at the Santa Fe Institute, during a panel last year.
Hinton visited Google’s Silicon Valley headquarters Tuesday for an informal celebration, and some of the company’s top AI executives congratulated him on social media.
On Wednesday, other prominent Googlers specialising in AI were also awarded a Nobel Prize. Demis Hassabis, chief executive of Google DeepMind, and John M. Jumper, director at the AI lab, were part of a group of three scientists who won the chemistry prize for their work on predicting the shape of proteins.
Hinton is sharing the Nobel Prize in physics with John Hopfield of Princeton University for their work since the 1980s on neural networks that process information in ways inspired by the human brain. That work is the basis for many of the AI technologies in use today, from ChatGPT’s humanlike conversations to Google Photos’ ability to recognise who is in every picture you take.
“Their contributions to connect fundamental concepts in physics with concepts in biology, not just AI—these concepts are still with us today,” said Yoshua Bengio, an AI researcher at the University of Montreal.
In 2012, Hinton worked with two of his University of Toronto graduate students, Alex Krizhevsky and Ilya Sutskever, on a neural network called AlexNet programmed to recognise images in photos. Until that point, computer algorithms had often been unable to tell that a picture of a dog was really a dog and not a cat or a car.
AlexNet’s blowout victory at a 2012 contest for image-recognition technology was a pivotal moment in the development of the modern AI boom, as it proved the power of neural nets over other approaches.
That same year, Hinton started a company with Krizhevsky and Sutskever that turned out to be short-lived. Google acquired it in 2013 in an auction against competitors including Baidu and Microsoft, paying $44 million essentially to hire the three men, according to the book “Genius Makers.” Hinton began splitting time between the University of Toronto and Google, where he continued research on neural networks.
Hinton is widely revered as a mentor for the current generation of top AI researchers including Sutskever, who co-founded OpenAI before leaving this spring to start a company called Safe Superintelligence.
Hinton received the 2018 Turing Award, a computer-science prize, for his work on neural networks alongside Bengio and a fellow AI researcher, Yann LeCun. The three are often referred to as the modern “godfathers of AI.”
By 2023, Hinton had become alarmed about the consequences of building more powerful artificial intelligence. He began talking about the possibility that AI systems could escape the control of their creators and cause catastrophic harm to humanity. In doing so, he aligned himself with a vocal movement of people concerned about the existential risks of the technology.
“We’re in a situation that most people can’t even conceive of, which is that these digital intelligences are going to be a lot smarter than us, and if they want to get stuff done, they’re going to want to take control,” Hinton said in an interview last year.
Hinton announced he was leaving Google in spring 2023, saying he wanted to be able to freely discuss the dangers of AI without worrying about consequences for the company. Google had acted “very responsibly,” he said in an X post.
In the subsequent months, Hinton has spent much of his time speaking to policymakers and tech executives, including Elon Musk, about AI risks.
Hinton cosigned a paper last year saying companies doing AI work should allocate at least one-third of their research and development resources to ensuring the safety and ethical use of their systems.
“One thing governments can do is force the big companies to spend a lot more of their resources on safety research, so that for example companies like OpenAI can’t just put safety research on the back burner,” Hinton said in the Nobel interview.
An OpenAI spokeswoman said the company is proud of its safety work.
With Bengio and other researchers, Hinton supported an artificial-intelligence safety bill passed by the California Legislature this summer that would have required developers of large AI systems to take a number of steps to ensure they can’t cause catastrophic damage. Gov. Gavin Newsom recently vetoed the bill, which was opposed by most big tech companies including Google.
Hinton’s increased activism has put him in opposition to other respected researchers who believe his warnings are fantastical because AI is far from having the capability to cause serious harm.
“Their complete lack of understanding of the physical world and lack of planning abilities put them way below cat-level intelligence, never mind human-level,” LeCun wrote in a response to Hinton on X last year.