Business Schools Are Going All In on AI
American University, other top M.B.A. programs reorient courses around artificial intelligence; ‘It has eaten our world’
At the Wharton School this spring, Prof. Ethan Mollick assigned students the task of automating away part of their jobs.
Mollick tells his students at the University of Pennsylvania to expect to feel insecure about their own capabilities once they understand what artificial intelligence can do.
“You haven’t used AI until you’ve had an existential crisis,” he said. “You need three sleepless nights.”
Top business schools are pushing M.B.A. candidates and undergraduates to use artificial intelligence as a second brain. Students are eager for the instruction as employers increasingly hire talent with AI skills.
American University’s Kogod School of Business is putting an unusually high emphasis on AI, threading teaching on the technology through 20 new or adapted classes, from forensic accounting to marketing, which will roll out next school year. Professors this week started training on how to use and teach AI tools.
Understanding and using AI is now a foundational concept, much like learning to write or reason, said David Marchick, dean of Kogod.
“Every young person needs to know how to use AI in whatever they do,” he said of the decision to embed AI instruction into every part of the business school’s undergraduate core curriculum.
Marchick, who uses ChatGPT to prep presentations to alumni and professors, ordered a review of Kogod’s coursework in December after Brett Wilson, a venture capitalist with Swift Ventures, visited campus and told students that they wouldn’t lose jobs to AI, but rather to professionals who are more skilled in deploying it.
American’s new AI classwork will include text mining, predictive analytics and using ChatGPT to prepare for negotiations, whether navigating workplace conflict or advocating for a promotion. New courses include one on AI in human-resource management and a new business and entertainment class focused on AI, a core issue of last year’s Hollywood writers strike.
Officials and faculty at Columbia Business School and Duke University’s Fuqua School of Business say fluency in AI will be key to graduates’ success in the corporate world, allowing them to climb the ranks of management. Forty percent of prospective business-school students surveyed by the Graduate Management Admission Council said learning AI is essential to a graduate business degree—a jump from 29% in 2022.
Many of them are also anxious that their jobs could be replaced by generative AI. Much of entry-level work could be automated, the management-consulting group Oliver Wyman projected in a recent report. That means that future early-career jobs might require a more muscular skillset and more closely resemble first-level management roles.
Business-school professors are now encouraging students to use generative AI as a tool, akin to a calculator for doing math.
M.B.A.s should be using AI to generate ideas quickly and comprehensively, according to Sheena Iyengar, a Columbia Business School professor who wrote “Think Bigger,” a book on innovation. But it’s still up to people to make good decisions and ask the technology the right questions.
“You still have to direct it, otherwise it will give you crap,” she said. “You cannot eliminate human judgment.”
One exercise that Iyengar walks her students through is using AI to generate business idea pitches from the automated perspectives of Tom Brady, Martha Stewart and Barack Obama. The assignment illustrates how ideas can be reframed for different audiences and based on different points of view.
Blake Bergeron, a 27-year-old M.B.A. student at Columbia, used generative AI to brainstorm new business ideas for a project last fall. One it returned was a travel service that recommends destinations based on a person’s social networks, pulling data from their friends’ posts. Bergeron’s team asked the AI to pressure-test the idea, coming up with pros and cons, and for potential business models.
Bergeron said he noticed pitfalls as he experimented. When his team asked the generative AI tool for ways to market the travel service, it spit out a group of very similar ideas. From there, Bergeron said, the students had to coax the tool to get creative, asking for one out-of-the-box idea at a time.
Professors say that through this instruction, they hope students learn where AI is currently weak. Mathematics and citations are two areas where mistakes abound. At Kogod this week, executives who were training professors in AI stressed that adopters of the technology need to review and edit all AI-generated content, including analysis, before sharing it.
When Robert Bray, who teaches operations management at Northwestern’s Kellogg School of Management, realized that ChatGPT could answer nearly every question in the textbook he uses for his data-analytics course, he updated the syllabus. Last year, he started to focus on teaching coding using large language models, which are trained on vast amounts of data to generate text and code. Enrollment jumped to 55 from 21 M.B.A. students, he said.
Engineers once had an edge over business graduates because of their technical expertise, but now M.B.A.s can use AI to compete on that ground, Bray said.
He encourages his students to offload as much work as possible to AI, treating it like “a really proficient intern.”
Ben Morton, one of Bray’s students, is bullish on AI but knows he needs to be able to work without it. He did some coding with ChatGPT for class and wondered: If ChatGPT were down for a week, could he still get work done?
Learning to code with the help of generative AI sped up his development.
“I know so much more about programming than I did six months ago,” said Morton, 27. “Everyone’s capabilities are exponentially increasing.”
Several professors said they can teach more material with AI’s assistance. One said that because AI could solve his lab assignments, he no longer needed much of the class time for those activities. With the extra hours he has students present to their peers on AI innovations. Campus is where students should think through how to use AI responsibly, said Bill Boulding, dean of Duke’s Fuqua School.
“How do we embrace it? That is the right way to approach this—we can’t stop this,” he said. “It has eaten our world. It will eat everyone else’s world.”
With the debut of DeepSeek’s buzzy chatbot and updates to others, we tried applying the technology—and a little human common sense—to the most mind-melting aspect of home cooking: weekly meal planning.
Read the news, and it won’t take long to find a story about the latest feat of artificial intelligence. AI passed the bar exam! It can help diagnose cancer! It “painted” a portrait that sold at Sotheby’s for $1 million!
My own great hope for AI: that it might simplify the everyday problem of meal planning.
Seem a bit unambitious? Think again. For more than two decades as a food writer, I’ve watched families struggle to get weeknight meals on the table. One big obstacle is putting in the upfront time to devise a variety of easy meals that fit both budget and lifestyle.
Meal planning poses surprisingly complex challenges. Stop for a minute and consider what you’re actually doing when you compile a weekly grocery list. Your brain is simultaneously calculating how many people are eating, the types of foods they enjoy, ingredient preferences (and intolerances), your budget, the time available to cook and so on. No wonder so many weeknights end with mediocre takeout.
Countless approaches have tried to “disrupt” the meal-plan slog: books, magazines, apps, the once-vaunted meal kits, which even delivered the ingredients right to your door. But none could offer truly personalized plans. Could AI succeed where others failed?
I conducted my first tests of AI in the summer of 2023, with mixed results. Early versions of OpenAI’s ChatGPT produced some usable recipes. (I still occasionally make its gingery pork in lettuce wraps.) But the shopping lists it created were sometimes missing an ingredient or two. Bots! They’re just like us!
Eager to please, the chatbot also made some comical culinary suggestions. After I mentioned I had a blender, it determinedly steered me to use the blender…for everything, including fried rice, which it recommended I whiz into a kind of gruel. While it provided a competent recipe for pasta with zucchini, thyme and lemon, it thought it would be brilliant to add marshmallows, which I’d mentioned I had in my pantry, to the sauce. As a friend said: “If you’re having AI plan the recipes for you, it should definitely be doing something better than what your stoned friend would make you at two in the morning.”
Early AI could plan meals for the week, but required a lot of hand-holding. Like an overconfident intern.
Eighteen months after those first attempts—about 1,000 years in AI time—I was ready to try again. In January, DeepSeek AI, a Chinese chatbot, grabbed headlines around the world for its capabilities and speed (and potential security risks). There were also new and improved versions of the chatbots I’d found wanting.
This time, I decided to experiment with ChatGPT, Anthropic’s Claude and DeepSeek. (To see how they compared to one another, see “Bytes to Bites,” below.)
From my first AI rodeo, I knew to use short, direct sentences and get very specific about what I wanted. “Think like an experienced family recipe developer,” I told DeepSeek. “Create a week’s worth of dinners for a family of four. At least three meals should be vegetarian. One person doesn’t like fresh tomatoes. We like Italian, Japanese and Mexican cuisine. All meals should be cooked within 60 minutes.”
For the next 24 seconds, the chatbot “reasoned” through my request, spelling out concerns as I watched, rapt: Would the person who doesn’t like fresh tomatoes eat marinara sauce? Black bean and sweet potato tacos are a nice vegetarian entree, but opt for salsa verde to avoid tomatoes. Lemony chicken piccata is fast, but serve with broccolini. It was…amazing. The consolidated shopping list the chatbot provided was error free.
I tried the same prompt with Claude and ChatGPT, with curiously similar results. With all the options in the world, both bots suggested black bean and sweet potato tacos, and chicken piccata. The recipes’ instructions varied, as did suggested side dishes.
I decided to write a more detailed request. “Long prompts are good prompts,” said Dan Priest, chief AI officer for consulting firm PwC in the U.S. The more information you provide, the more the AI can “align with your expectations.” Don’t try to get everything right the first time, Priest said: “Have a conversation.”
Good advice. I admit, when I first began my tests, I was searching for weak spots. But I learned it’s crucial to refine requests. As Priest said, AI will consider your various demands and make trade-offs—though perhaps not the ones you’d make.
So I started talking to AI. I said I like to cook with seasonal ingredients—that my dream dinner is a night at Chez Panisse, the Berkeley restaurant where chef Alice Waters redefined rustic-French cooking as California cuisine. Within seconds I had gorgeous recipes for spring lamb chops with fresh herbs, and miso-glazed cod with spring onions and soba. When I asked to limit the budget to $200, the bot swapped in pork for pricey lamb and haddock for cod. I requested meals that adhered to guidelines from the American Heart Association, and recipes that used only what was in my fridge. No problem.
But would the recipes work? Chatbots don’t have experience cooking; they are large language models trained to predict what word should follow the last. As any cook knows, a recipe that reads well can still end in disaster. To my surprise, the recipes I tested worked exactly as written by the chatbots—and took no longer than advertised. Even my Luddite husband called Claude’s rigatoni with butternut squash, kale and brown butter “a keeper.”
As yet no chatbot can compete with Alice Waters—or my husband, for that matter—in the kitchen. (For more on that, see “How Do Real Cooks Rate AI?” below.) But I’ll keep asking AI to, say, create shopping lists for recipes I upload, or come up with a recipe for what I happen to have in the refrigerator—as long as I’m there to whisper in the chatbot’s ear.
Which chatbot is right for your kitchen?
Any of the three chatbots we tested can deliver a working meal plan—if you know how to talk to it. My personal pick was Anthropic’s Claude, for its intelligent tone and creativity, followed by DeepSeek AI for its “reasoning.” AI “agents” such as OpenAI’s Operator can, in theory, order the food needed to cook your recipes, but the consensus is they need a bit more time to develop.
OpenAI’s ChatGPT • I had quibbles with ChatGPT’s first round of recipes. The seasoning skewed bland—only one tablespoon of soy sauce for a large veggie stir-fry. It had me start by sautéing my chicken piccata, which then got cold while the pasta cooked. ChatGPT was also annoyingly chipper in its interactions. Still, with a few requested revisions, its lemon and pea risotto was perfection.
DeepSeek AI • I was impressed with this chatbot’s “reasoning” and the way it balanced sometimes-conflicting requests. Its recipes were seasonal (without prompting) and easy to follow; its shopping list, error free. Its one unforgivable mistake: presuming a paltry number of stuffed pasta shells would feed my hungry family. Some have voiced security concerns over using a Chinese chatbot; I felt comfortable sharing my meal preferences with it.
Anthropic’s Claude • I felt like Claude “got” me. This encouraged me to chat with it, resulting in recipes I liked and that worked, like a Mexican pozole for winter nights. This bot does need prompting; its initial instructions for brown butter and crispy sage leaves would have flummoxed an inexperienced cook. But when I suggested it offer step-by-step instructions, it praised me, which made me think it was even smarter.
Have a conversation. Even a very specific meal-planning prompt requires AI to make assumptions and choices you might oppose. Ask it to revise. Add additional requirements. Follow up for more specific instructions. Time spent up front will deliver a more successful plan.
Role-play. Ask AI to think like a cook whose food you enjoy. (Told I like writer Tamar Adler’s recipes, Claude instantly offered one for wild mushroom bread pudding.) If you aren’t a skilled cook, it’s probably unwise to ask AI to mimic a three-star chef. Instead, ask it to simplify recipes inspired by your idol.
Read carefully and use common sense. It is always important to read through a recipe before you shop or set up in the kitchen, and this is especially true with AI. Recipes are invented on the fly and not tested. Ask for clarification if necessary, or a rewrite based on your skills, equipment or time.
Ask for a consolidated shopping list. In seconds, AI can aggregate the ingredients for your recipes into a single grocery list. Ask for total pounds or number of packages needed. (This saves you having to figure out, for example, how many red peppers to buy for 2 cups diced.)
Request cook times and visual cues. A good recipe writer lets you know how things will look or feel as they cook. Ask AI for the same. This will improve a vague “Bake for 20 minutes” to “bake for 20 minutes or until golden brown and the cake springs back to the touch.”
We asked AI to create dishes in the style of three favorite cooks, which it does based on the text from the internet and elsewhere that it has been trained on. Then we asked the cooks to judge the results. Verdict: The recipes didn’t reflect our panel’s expertise or attention to detail. It seems AI can’t replace them—yet.
Tamar Adler • Trained to cook at seminal restaurants including Prune and Chez Panisse; food writer, cookbook author, podcaster
AI dishes inspired by Tamar: Winter Squash and Wild Mushroom Bread Pudding; Braised Lamb Shoulder With White Beans and Winter Herbs; Pan-Roasted Cod With Leeks and Potatoes
Assessment: “Superficially, the recipes seem great and like recipes I would write.”
Critiques: “So much of everything I’ve written has been geared toward helping cooks build community and capability. Here, a cook is neither digging in and learning by trying and failing and repeating and growing; nor are they talking to another person, exchanging advice, smiles, jokes, ideas, updates.”
GRADE: C
Nik Sharma • Molecular biologist turned chef; editor in residence, America’s Test Kitchen; cookbook author
AI dishes inspired by Nik: Black Pepper and Lime Dal With Crispy Shallots; Roasted Spring Chicken With Black Cardamom and Orange; Roasted Winter Squash and Root Vegetables With Maple-Miso Glaze
Assessment: “A bit creepy. It’s trying too hard to imitate me but leaving out my intuition and propensity to experiment.”
Critiques: “Ingredients are not listed in order of use, and quantities and cook times are off. Black cardamom would kill that chicken. Also: I always list volumes for liquids and weights, whenever possible.” (AI did not—but you could ask it to!)
GRADE: C
Andrea Nguyen • Leading expert on the cuisine of Vietnam; cookbook author, cooking teacher, creator of Viet World Kitchen
AI dishes inspired by Andrea: Quick Lemongrass Chicken Bowl; Winter Vegetable Banh Mi With Spicy Mayo; Quick-Braised Ginger Pork with Winter Citrus
Assessment: “Machine learning is good for certain things, like getting factual questions answered. AI mined my content near and far, and got some things right but not others. Good recipes contain nuances in instructions that offer visual and taste cues.”
Critiques: “Quantities were off—often way off. The rice bowl is only good for a desperate moment. The ginger pork is an awful mash up of ideas. Yuck.”
GRADE: C/C+