Business Schools Are Going All In on AI

American University, other top M.B.A. programs reorient courses around artificial intelligence; ‘It has eaten our world’

By LINDSAY ELLIS
Fri, Apr 5, 2024 7:00am · 4 min

At the Wharton School this spring, Prof. Ethan Mollick assigned students the task of automating away part of their jobs.

Mollick tells his students at the University of Pennsylvania to expect to feel insecure about their own capabilities once they understand what artificial intelligence can do.

“You haven’t used AI until you’ve had an existential crisis,” he said. “You need three sleepless nights.”

Top business schools are pushing M.B.A. candidates and undergraduates to use artificial intelligence as a second brain. Students are eager for the instruction as employers increasingly hire talent with AI skills.

American University’s Kogod School of Business is putting an unusually high emphasis on AI, threading teaching on the technology through 20 new or adapted classes, from forensic accounting to marketing, which will roll out next school year. Professors this week started training on how to use and teach AI tools.

Understanding and using AI is now a foundational concept, much like learning to write or reason, said David Marchick, dean of Kogod.

“Every young person needs to know how to use AI in whatever they do,” he said of the decision to embed AI instruction into every part of the business school’s undergraduate core curriculum.

Marchick, who uses ChatGPT to prep presentations to alumni and professors, ordered a review of Kogod’s coursework in December after Brett Wilson, a venture capitalist with Swift Ventures, visited campus and told students that they wouldn’t lose jobs to AI, but rather to professionals who are more skilled in deploying it.

American’s new AI classwork will include text mining, predictive analytics and using ChatGPT to prepare for negotiations, whether navigating workplace conflict or advocating for a promotion. New courses include one on AI in human-resource management and a new business and entertainment class focused on AI, a core issue of last year’s Hollywood writers strike.

Officials and faculty at Columbia Business School and Duke University’s Fuqua School of Business say fluency in AI will be key to graduates’ success in the corporate world, allowing them to climb the ranks of management. Forty percent of prospective business-school students surveyed by the Graduate Management Admission Council said learning AI is essential to a graduate business degree—a jump from 29% in 2022.

Many of them are also anxious that their jobs could be replaced by generative AI. Much of entry-level work could be automated, the management-consulting group Oliver Wyman projected in a recent report. That means future early-career jobs might require a more muscular skillset and more closely resemble first-level management roles.

Faster thinking

Business-school professors are now encouraging students to use generative AI as a tool, akin to a calculator for doing math.

M.B.A.s should be using AI to generate ideas quickly and comprehensively, according to Sheena Iyengar, a Columbia Business School professor who wrote “Think Bigger,” a book on innovation. But it’s still up to people to make good decisions and ask the technology the right questions.

“You still have to direct it, otherwise it will give you crap,” she said. “You cannot eliminate human judgment.”

One exercise that Iyengar walks her students through is using AI to generate business idea pitches from the simulated perspectives of Tom Brady, Martha Stewart and Barack Obama. The assignment illustrates how ideas can be reframed for different audiences and from different points of view.

Blake Bergeron, a 27-year-old M.B.A. student at Columbia, used generative AI to brainstorm new business ideas for a project last fall. One it returned was a travel service that recommends destinations based on a person’s social networks, pulling data from their friends’ posts. Bergeron’s team asked the AI to pressure-test the idea, coming up with pros and cons, and for potential business models.

Bergeron said he noticed pitfalls as he experimented. When his team asked the generative AI tool for ways to market the travel service, it spit out a group of very similar ideas. From there, Bergeron said, the students had to coax the tool to get creative, asking for one out-of-the-box idea at a time.
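
The back-and-forth Bergeron describes, first asking the model to pressure-test an idea and then coaxing it one unconventional suggestion at a time, maps onto a simple prompting loop. The sketch below is illustrative only: it assumes the OpenAI Python client and a hypothetical model name and prompts, not the students' actual workflow.

```python
# Illustrative sketch of the prompting pattern described above, assuming the
# OpenAI Python client (openai>=1.0) and an API key in OPENAI_API_KEY.
# Model name and prompts are hypothetical, not the students' actual workflow.
from openai import OpenAI

client = OpenAI()

idea = ("A travel service that recommends destinations by drawing on "
        "a person's social network and their friends' posts.")

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: pressure-test the idea with pros, cons and possible business models.
print(ask(f"List the pros, cons and three possible business models for this idea: {idea}"))

# Step 2: asking for many marketing ideas at once tends to produce similar ones,
# so request a single out-of-the-box idea per call and repeat.
for _ in range(3):
    print(ask("Give exactly one unconventional, out-of-the-box way to market "
              f"this idea, avoiding anything obvious: {idea}"))
```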

Professors say that through this instruction, they hope students learn where AI is currently weak. Mathematics and citations are two areas where mistakes abound. At Kogod this week, executives training professors in AI stressed that adopters of the technology need to have a human review and edit all AI-generated content, including analysis, before sharing it.

Faster doing

When Robert Bray, who teaches operations management at Northwestern’s Kellogg School of Management, realised that ChatGPT could answer nearly every question in the textbook he uses for his data-analytics course, he updated the syllabus. Last year, he started focusing on teaching coding with large language models, which are trained on vast amounts of data to generate text and code. Enrolment jumped to 55 M.B.A. students from 21, he said.

Before, engineers had an edge against business graduates because of their technical expertise, but now M.B.A.s can use AI to compete in that zone, Bray said.

He encourages his students to offload as much work as possible to AI, treating it like “a really proficient intern.”

Ben Morton, one of Bray’s students, is bullish on AI but knows he needs to be able to work without it. He did some coding with ChatGPT for class and wondered: If ChatGPT were down for a week, could he still get work done?

Learning to code with the help of generative AI sped up his development.

“I know so much more about programming than I did six months ago,” said Morton, 27. “Everyone’s capabilities are exponentially increasing.”

Several professors said they can teach more material with AI’s assistance. One said that because AI could solve his lab assignments, he no longer needed much of the class time for those activities. With the extra hours, he has students present to their peers on AI innovations. Campus is where students should think through how to use AI responsibly, said Bill Boulding, dean of Duke’s Fuqua School.

“How do we embrace it? That is the right way to approach this—we can’t stop this,” he said. “It has eaten our world. It will eat everyone else’s world.”



Should AI Have Access to Your Medical Records? What if It Can Save Many Lives?

We asked readers: Is it worth giving up some potential privacy if the public benefit could be great? Here’s what they said.

By DEMETRIA GALLEGOS
Tue, May 28, 2024 · 4 min

We’re constantly told that one of the potentially biggest benefits of artificial intelligence is in the area of health. With access to large amounts of data, AI could help create all sorts of drugs for diseases that have resisted treatment.

But the price of that could be that we have to share more of our medical information. After all, researchers can’t collect large amounts of data if people aren’t willing to part with that data.

We wanted to see where our readers stand on the balance of privacy versus public-health gains as part of our series on ethical dilemmas created by the advent of AI.

Here are the questions we posed…

AI may be able to discover new medical treatments if it can scan large volumes of health records. Should our personal health records be made available for this purpose, if it has the potential to improve or save millions of lives? How would we guard privacy in that case?

…and some of the answers we received.

Rely on nonpartisan overseers

While my own recent experience with a data breach highlights the importance of robust data security, I recognise the potential for AI to revolutionise healthcare. To ensure privacy, I would be more comfortable if an independent, nonpartisan body—overseen by medical professionals, data-security experts, and citizen representatives—managed a secure database.

Anonymity cuts both ways

Yes. Simply sanitise the health records of any identifying information, which is quite doable. Although there is an argument to be made that AI may discover something that an individual needs or wants to know.

Executive-level oversight

I think we can make AI scanning of health records available with strict privacy controls. Create an AI-CEO position at medical facilities with extreme vetting of that individual before hiring them.

Well worth it

This actually sounds like a very GOOD use of AI. There are several methods for anonymising data which would allow for studies over massive cross-sections of the population without compromising individuals’ privacy. The AI would just be doing the same things meta-studies do now, only faster and maybe better.

Human touch

My concern is that the next generations of doctors will rely more heavily, maybe exclusively, on AI and lose the ability or even the desire to respect the art of medicine which demands one-on-one interaction with a patient for discussion and examination (already a dying skill).

Postmortem

People should be able to sign over rights to their complete “anonymised” health record upon death just as they can sign over rights to their organs. Waiting for death for such access does temporarily slow down the pace of such research, but ultimately will make the research better. Data sets will be more complete, too. Before signing over such rights, however, a person would have to be fully informed on how their relatives’ privacy may also be affected.

Pay me or make it free for all

As long as this is open-source and free, they can use my records. I have a problem with people using my data to make a profit without compensation.

Privacy above all

As a free society, we value freedoms and privacy, often over greater utilitarian benefits that could come. AI does not get any greater right to infringe on that liberty than anything else does.

Opt-in only

You should be able to opt in and choose a plan that protects your privacy.

Privacy doesn’t exist anyway

If it is decided to extend human lives indefinitely, then by all means, scan all health records. As for privacy, there is no such thing. All databases, once established, will eventually, if not immediately, be accessed or hacked by both the good and bad guys.

The data’s already out there

I think it should be made available. We already sign our rights for information over to large insurance companies. Making health records in the aggregate available for helping AI spot potential ways to improve medical care makes sense to me.

Overarching benefit

Of course they should be made available. Privacy is no serious concern when the benefits are so huge for so many.

Compensation for breakthroughs

We should be given the choice to release our records and compensated if our particular genome creates a pathway to treatment and medications.

Too risky

I like the idea of improving healthcare by accessing health records. However, as great as that potential is, the risks outweigh it. Access to the information would not be controlled. Too many would see personal opportunity in it for personal gain.

Nothing personal

The personal info should never be available to anyone who is not specifically authorised by the patient to have it. Medical information can be used to deny people employment or licenses!

No guarantee, but go ahead

This should be allowed on an anonymous basis, without question. But how to provide that anonymity?

Anonymously isolating the information is probably easy, but that information probably contains enough information to identify you if someone had access to the data and was strongly motivated. So the answer lies in restricting access to the raw data to trusted individuals.

Take my records, please

As a person with multiple medical conditions taking 28 medications a day, I highly endorse the use of my records. It is an area where I have found AI particularly valuable. With no medical educational background, I find it very helpful when AI describes in layman’s terms both my conditions and medications. In one instance, while interpreting a CT scan, AI noted a growth on my kidney that looked suspiciously like cancer and had not been disclosed to me by any of the four doctors examining the chart.
