It has been less than a year since OpenAI changed the world overnight with the release of ChatGPT. Since its November launch, the chatbot’s display of generative artificial-intelligence software has triggered a reshuffling of investment priorities in Silicon Valley, on Sand Hill Road, and on Wall Street. Almost every company—and this goes well beyond tech—has prioritised the development and adoption of generative AI.
The notion of artificial intelligence, or computers that can “think,” has been around since the Cold War. What makes generative AI so fresh is the ability to answer questions posed as simple natural-language requests—and respond with rich, creative content in the form of text, music, video, images, or even poetry.
Generative AI promises to democratise the power of large data sets, making it dramatically easier for people and businesses to find information, create content, and analyse data. And yet, AI isn’t magic, despite all appearances to the contrary. The technology is creating widespread worries about the misappropriation of personal information, the misuse of copyright-protected content, and the creation of false and misleading data. Some people even see AI as an existential risk to the future of life on Earth—a recent Time magazine cover asked whether AI will eventually lead to “The End of Humanity.”
To consider the outlook for AI, how it works, where the risks lie, and who will lead the way, Barron’s assembled a panel of five experts who approach AI from divergent angles. Our AI roundtable panelists included Dario Gil, director of research at IBM, which has spent decades working on artificial-intelligence software and hardware; Irene Solaiman, policy director at Hugging Face, a marketplace for AI models, data sets, and software; Cathy Gao, a partner at Sapphire Ventures, which has committed to investing at least $1 billion in AI-focused start-ups; Mark Moerdler, a software analyst at Bernstein Research who completed a doctorate in computer science and artificial intelligence in 1990; and Brook Dane, a portfolio manager at Goldman Sachs who lately has revamped his investment strategy to focus on AI stock plays.
The conversation took place in early August on Zoom. An edited version follows.
Barron’s: Let’s start by framing just how big a deal generative AI is. The biggest thing since the Web? The iPhone? Electricity? The wheel? Dario, IBM has been working on AI for decades—it has been 12 years since Watson’s famous appearance on Jeopardy. So, what has changed?
Dario Gil: IBM has actually been involved in AI since 1956, the year of a famous conference we co-sponsored called the Dartmouth Summer Project on Artificial Intelligence. Arthur Samuel, an IBM computer scientist who did pioneering work in AI, coined the term “machine learning” in 1959. So, yes, the idea of AI has been around for a long, long time. The past decade or so has been the era of deep learning and neural networks, where we discovered that if you could label enough data, you could achieve superhuman levels of accuracy.
But that turned out to be extremely expensive.
Gil: Right. There were only a handful of institutions that could actually amass enough labeled data—say, hand-tagged photographs—to generate good value and a reasonable return on investment. The fundamental reason why there’s so much excitement around AI now is this transition toward “self-supervision.”
Explain that for us.
Gil: The advent of foundational models—the basis of generative AI—allows us to take large amounts of unlabelled data and create very powerful representations of language, code, chemistry, and materials, or even images. And as a consequence of that, once you train these models, the downstream use cases allow you to fine-tune or prompt or engineer them with a fraction of the energy, effort, and resources that historically would have been required to create those use cases. It’s what is unlocking this productivity moment in AI.
Bill Gates has said that ChatGPT was the most impressive technology he’d experienced since he first saw a graphical user interface in 1980—that it was as fundamental as the creation of the microprocessor, the PC, the internet, or the mobile phone. Nvidia [ticker: NVDA] CEO Jensen Huang says AI is having an iPhone moment. Cathy, Sapphire just announced a commitment to invest $1 billion in AI start-ups. Does this moment really feel that big? What is the opportunity that Sapphire sees?
Cathy Gao: In AI, we invest end to end, in everything from the plumbing to how data move through the stack to the application layer. We are seeing an arc similar to other platform shifts, and we believe this one is significant. We’re in the early stages of that explosion, headed ultimately to ubiquity.
But why now? What makes this the moment?
Gao: It’s being driven by many things. One of the keys is that the consumer imagination has been captured. ChatGPT reached 100 million users in a groundbreaking two months. You can see a future where AI becomes so ubiquitous that companies no longer market themselves as “AI companies” because they’ve all become AI companies. In part, this is about ease of use, the ability to leverage foundational models via API [application programming interface, a protocol that lets software programs connect], so you’re not having to rebuild them every time. You’re not having to build these LLMs [large language models] from scratch. And the other element is the end-user experience, which will take us to the next phase of ubiquity.
Mark, you earned a doctorate in artificial intelligence a few decades back. What’s different now?
Mark Moerdler: It feels like 100 years ago. We were learning how to do very basic things with AI. Since then, we’ve seen massive improvements in technology. Underlying computing capabilities have massively expanded. You couldn’t run the types of learning models that you can do today, because the computers couldn’t deal with that capacity. We were using far smaller computers, with less memory, storage, and bandwidth.
And I would agree with Cathy that this is all about conversational AI. Until now, it’s all been under the hood. Now, you can hold a conversation with software in the same way you might talk to a person, and the system will respond to you, maybe with a report, or by creating an image, or simply with an ongoing conversation.
This was all sparked by ChatGPT and a consumer experience, natural-language chat. Will consumers—and advertising—ultimately be the revenue source for this business, or will it be more about a growing market for enterprise applications?
Moerdler: Some of the largest companies in the world—including Microsoft [MSFT] and Alphabet [GOOGL]—are involved in both consumer and enterprise AI software. There will be disruption on the consumer side, in terms of where you search for information. But arguably, the bigger value creation is going to be unlocking the data within enterprises, to leverage that data to drive efficiencies within organizations, make leaps of intuition in coming up with answers, or make decisions faster, or in some cases reach conclusions you couldn’t previously reach because you didn’t have easy access to the data.
Gao: What’s happening in Gen AI on the consumer side and the B2B—or business-to-business—side are highly symbiotic. They’re feeding into each other. There is a huge opportunity for enterprise software companies today, and that’s why you’re seeing a lot of investment. Gen AI is the ultimate double-edged sword. On the one hand, it represents tremendous potential to be transformative, and the key to future growth. But it can also create new competition that could be hard to beat, in some cases creating existential risk for the incumbents.
Gen AI can increase the addressable market for many companies and industries. Take a core system of record like ERP, or electronic medical records, or a payroll service like ADP, which stores a lot of valuable data. Often, the existing customer interface layer limits many potential use cases. Gen AI can be used to reimagine and reinvent workflows, and to open up the addressable markets in a significant way.
Irene Solaiman: It’s important to step back and think about what systems we are discussing, because there are so many language models out there. When we’re talking about generative AI, the way you would do research on these models, or adapt them to a given application, is going to differ by modality. There’s a lot of chatter around chatbots, but there’s a lot happening with imagery, audio, and even video that isn’t the subject of as much research or literature as language is. There’s so much opportunity.
Remember, also, that these base systems often aren’t developed for a specific use case. They may be optimized for tasks like code generation. But generally, they can be applied to many different fields, which is exciting. There are also risks; we need to figure out what safeguards we need.
Irene, I was visiting the Hugging Face website and was struck by the number of models and data sets your site offers. This isn’t just about Microsoft, Meta Platforms [META], or Alphabet.
Solaiman: We have almost 300,000 different models, over 100,000 applications, and more than 50,000 data sets. Not all of the models are focused on natural-language processing. There are models for more-specific fields, like biomedical AI. There’s a lot of discussion around advanced models like OpenAI’s GPT-4. But that’s not what everyone is going to use. Large language models are computationally expensive to run. At Hugging Face, we’re seeing a lot of researchers use much smaller models that are cheaper to adapt and fine-tune.
That raises questions about where the value lies—and who the winners will be. Brook, it’s your job to identify AI winners. Do you see the value going to those that have the data, or the application vendors, or someone else? How do you approach that when looking for AI-related companies in which to invest?
Brook Dane: It’s incredibly early in this journey, especially when you look beyond the providers of semiconductor and networking infrastructure, like Nvidia, which has a near-monopoly on graphics processors used to train models. When you think about the software layer, some of these things are TBD—to be determined. Early on, though, the idea that data have gravity and will be the source of competitive advantage appears to be true. We’re focused on that.
Beyond infrastructure, we are spending the bulk of our time on data, and which players can drive value and capture value over time. The other issue is that, unlike some other big tech transitions of the past, you don’t have to rewrite the entire software stack. In other words, I wonder whether there will be as much disruption to the leaders in the marketplace as in previous shifts. The shift to mobile and the internet created a whole new class of companies that rose up and displaced the incumbents. I wonder if this time the incumbents will actually reinforce their power, because they already have the data.
Moerdler: I agree. The speed of building models is very high. We’re talking months, not years. It’s just a matter of money. Differentiation is going to create sustainable value where you can create something trained on unique data and capabilities—and where the uniqueness is sustainable. In traditional software, the moat was created because it took so much time to create the technology. For a competitor to catch up took a really long time. Here, everyone is building capabilities. If you can’t differentiate, you aren’t going to be able to monetise it.
We’ve talked a lot about models and data sets. What differentiates the two?
Moerdler: When people talk about models, there are several types. There are generic models trained on very large data sets, for the purposes of answering more generalised queries, like ChatGPT and Bing. There are specialised models for very specific problems—say, in chemistry or materials sciences. And there’s an enormous amount of data sitting inside companies. Companies may choose to use a more generic model and ground it with their corporate data.
Gao: Let me give you an example to illustrate what Mark is saying. One of our portfolio companies, Moveworks, is an AI chatbot that cuts across enterprise applications in areas like IT service management and human resources, and adds company-specific data. If a customer has a conference room called Taylor Swift, for instance, and you ask a public chatbot if Taylor Swift is available at 9 a.m., the model is going to get confused. But if the chatbot is infused with information about the company’s conference-room names, it can produce an accurate answer.
Gil: The pattern of consumption is essential for how AI is used in the real world. So, you start with your base model, and then you load your records of, say, past customer exchanges and service documents around that—you’re fine-tuning the model so it incorporates your local data. Productivity gains are linked to that idea. Once you have base models for solving IT problems, all of a sudden your internal team can do 50 or 100 projects a year. In the deep-learning era, when everything had to be labeled by hand and every model was custom, you could do just four or five.
Solaiman: I always use the term “system” instead of model. But I’m so glad to hear all this talk about data. And when we’re thinking about system life cycles, there’s a lot of work, as Dario was saying, that goes into data collation, curation, and governance. An organization is going to train on an open data set that may have been collated and curated by somebody else.
This brings us to the question of why this is all happening now. We have much more impressive systems than we did just a few years ago. We have better techniques and better infrastructure, including more efficient computing, more computing, and more data. And we have better safety research, better fine-tuning of the information, and better accessibility, not just via APIs, but with models that are more compute-efficient, that can run even on local hardware.
In an interview with Barron’s after the latest Palantir [PLTR] earnings call, CEO Alex Karp said that this technological revolution favors the incumbents—unlike previous tech disruption that advantaged new companies. He thinks the winners will be familiar players, not new ones. Brook, you already touched on this idea. Cathy, as an investor in new companies, do you find that discouraging?
Gao: That’s the No. 1 question. Look, the incumbents have scale and capital. They have the computing resources, which are scarce these days. And they have tremendous data. They have key ingredients to be very, very successful around Gen AI. The incumbents are certainly going to be playing an outsize role in this era. I’m talking about hyperscalers, such as Google, Amazon.com [AMZN], Microsoft, and others, that are aggressively investing in this technology. On its latest call, Microsoft mentioned AI 59 times.
That’s even more than the 53 times Microsoft said the word “cloud.”
Gao: For an investor like me who is looking for the disrupters, the biggest question—and the biggest risk—when you look at most Gen AI application software companies is, what if Microsoft, or Google, or Adobe [ADBE] does this in the future? Is this new company going to be wiped out? The differentiators will be the same as with any software-as-a-service application. It will be about customer and product experience being deeply embedded into workflows, and that data moat that we talked about earlier.
A lot of the founders I’ve been speaking to over the past couple of months, when asked about Gen AI suddenly blowing up in the past two quarters, always say the same thing. They say, on the one hand, that it has been amazing for the market, with inbound queries just flooding in. But at the same time, it has lowered the barrier to entry for new players. Plus, the hyperscalers like Amazon, Meta, Alphabet, and Microsoft are now paying more attention to this opportunity.
Dane: I agree with everything Cathy just said. In every transition, new companies emerge, and some become large. But there really is a power of incumbency here, because of the need for data, and because you can develop these tools and techniques relatively quickly, the way Microsoft has announced AI software across its software stack. The incumbents do have a huge advantage. It’s going to come down to leadership and execution, as it always does—especially now, after a period in which the market has been focused on margin expansion. There’s a level of investment required to do this, and some of the incumbents are going to hesitate to spend what they need to spend to be relevant players. But the advantage starts with incumbency in this transition.
Mark, do you agree?
Moerdler: Yes, but let me add to that. AI is a data-driven learning experience. The more access you have to data, theoretically, the better your product becomes. And therefore, the quicker you can get to market, the more information you can absorb, the broader the reach—it has somewhat of a self-fulfilling-prophecy effect. But as Brook rightly said, it comes down to execution, and many companies now are paying lip service to generative AI rather than giving it the significant focus and investment that may be necessary to create a moated solution.
Dane: As I think about my models and forecasts across the software ecosystem, the ones that execute well in this are going to see a lower churn rate, higher customer retention, and higher upsell and cross-sell into their installed base. You’re starting to see companies for which your degree of confidence in the two-, three-, four-year-out free-cash-flow outlook is structurally higher now. All of this is still super-early, and I’m not sure that it impacts the next 12 months’ cash flows in any material way. But as I think beyond that horizon, I get increased confidence in their ability to be bigger, stronger, faster businesses.
It seems clear that we’re not talking just about the importance of data held by tech companies. Legacy companies in areas such as financial services, pharmaceuticals, and materials have tons of data, too.
Gil: Understanding the moment as a shift in data representation is really important. It may sound a little bit abstract, but it is profound. When the relational database was invented, there was a form of data representation that we’re all accustomed to, of rows and columns. Databases were invented to do that well, transaction-processing systems do that well, and it had huge implications for payroll and finance and accounting. Now, imagine instead a graph-based data representation. It turns out that graph representation is essential to do things like search, social media, and so on. You’re going to take the data that you have today—relational databases, graphs, and so on—and map them to this new way of encoding information.
So, who gets to be a value creator? Enterprises and governments the world over have the most data. It looks at the moment like all of this is concentrated in about five American companies, but that isn’t how the future is going to evolve, because contrary to popular opinion, and thanks to open-source initiatives, the democratisation of AI is perhaps the most important force at present. Understanding how much simpler it will become to take advantage of these large language models, to adapt them, to create them, will turn out to be the defining trend as it gets internationalised and democratised, and value creation gets more distributed.
Solaiman: That’s one of the reasons I do this work. What we’re building has a lot of potential, but potential for whom? For instance, what are most keyboards optimised for? Latin-character alphabets, like English’s. When I worked at OpenAI, I used to test a lot of the models, not just in English but also in the only non-Latin-character language I understand, which is Bangla, the national language of Bangladesh. I got to see Bangla-speaking researchers working in a language deeply underrepresented in natural-language processing. When you make systems work for many different groups of people, opportunities open up. The question from a governance point of view is, how do we make sure data collection isn’t exploitative and appropriately represents every community?
That brings us to an important topic, which is regulation, and mitigating risks and potential harms. There are questions around job loss, intellectual property protection, and deep fakes. Congress has held hearings. Do we need a new regulator? New rules? And how do we do that without reducing the competitive position of U.S. companies relative to those in China or elsewhere?
Moerdler: We’re in a new era. Regulators don’t necessarily have the experience in this area. They are learning as the rest of us are learning exactly how to deal with it. Regulation, like everything, can be a two-edged sword. It can be used to limit bad actions. It could also limit development. There needs to be control to assure governance, privacy, and security, that the systems aren’t misused by bad actors. There needs to be some level of standardization of requirements, of control, and maybe even regulation. But it has to be done in a thoughtful way, or what will end up happening is that you will create an opportunity for companies outside the U.S. to take market share and take advantage.
Irene, what is your sense of this?
Solaiman: Good regulation is hard to do. Regulators wear so many hats. They can’t be experts in AI, but they are experts in the public interest. I want to learn from policy makers which direction they think AI should be going. But it is immensely difficult to regulate. And what systems are we actually talking about? No single piece of legislation is going to affect every aspect of AI. Regulators in the U.S., the European Union, the United Kingdom, and Canada are trying. There is an unprecedented level of attention in Congress. Hugging Face is pro-regulation, but we want it done in a way that guides innovation in the right direction. There need to be better standards, but that means working together closely. There are incredible experts throughout all of these regulatory bodies on what that would look like and how it can be extrapolated to non-generative AI systems, as well.
Gil: A framework of precision regulation would serve the industry well. Look at the work the EU did in the past few years. They developed a very thoughtful approach on use cases and risk-adjusted regulatory frameworks. There’s a huge difference between applying AI in a nuclear reactor and applying AI for a pizza-recommendation system. Right? And so risk-adjust, where you categorise how much harm this is likely to cause, or how much risk this is going to induce in society, and use the appropriate regulatory bodies to beef up the expertise.
Enable every agency to become an AI agency, an additional element that they incorporate. This is in contrast to having a single AI regulator that is going to figure out the whole thing. Regulating the technology itself, regulating mathematics, is a really bad idea. And there are people talking about registering the models—that’s the wrong way to go.
Focus on the use cases. Focus on the harm and the impact around that, and regulate using existing bodies against those by beefing up their AI knowledge and expertise and sophistication. Sometimes, the hyperbolic rhetoric that has come even from the tech industry is causing more harm than good. Lowering the tone and focusing around the harm and the damages and the impact, and on those regulatory bodies and the people who are doing that, would be the right way forward for precision regulation.
Cathy, how does the risk of added regulation affect your thinking about where to put Sapphire’s money?
Gao: It’s something we consider closely. We’re still in the very early innings—there are a lot of unknowns. Venture capital is a high-beta asset class by definition. But we want to be smart about the risks we take. When it comes to AI, many of the use cases we’re looking at right now are less likely to be a target of regulatory scrutiny. We’re not looking at companies that affect life or death, like in healthcare. Still, we’re following it very closely. We definitely take that into consideration, but we also accept that some of the unknowns will remain when we make an investment.
Moerdler: These systems could be problematic from a privacy point of view, from a bias point of view, from an intellectual-property point of view. Investors need to think through where they could be exposed. It may not be regulators. It may be the fact that, you know what, you trained up these solutions, and the responses they’re giving impinge on other people’s IP, and therefore your clients—and you—are going to get sued. That becomes part of the math you need to do when determining whether these systems are going to become good, sustainable businesses that will generate not just revenue, but also profits, over a long period—and whether they’re going to cross a line or create legal, regulatory, or economic exposure.
One other risk that has been widely discussed is the potential that AI will cost people jobs. Is AI going to be a net job creator—or destroyer?
Gil: We have a couple of hundred years of evidence that the nature of jobs changes over time. A hundred years ago, half of the U.S. population was working in the fields. So, first of all, this phenomenon isn’t new. Whenever really disruptive technology emerges, people think this time will be different. The evidence suggests that won’t be the case. There’s a lot of good analysis that jobs are composed of many, many diverse tasks, and some will be subject to automation while many others won’t. The key metric that people are focused on is whether we can deliver on the productivity promise. With better productivity, we can generate more wealth, and invest in things we care deeply about to create better institutions, a better society, and so on.
I’m more worried about whether we can deliver solutions fast enough to reach the productivity gains we need, and discover solutions to the problems that we face. When I talk to people about advances in AI, semiconductors, and quantum computing, and they are stressed out about the rate of technological evolution, I like to say, look around. I don’t think we’ve run out of problems to solve. And if we can use these technologies to accelerate how quickly we can discover some of these solutions, we are all going to be very well served. People losing their jobs to advances in AI is definitely not one of my fears. History tells us as much.
Solaiman: Just five years ago, one conversation was around how autonomous vehicles would replace drivers and cost the jobs of truck drivers and others. But it turns out, the most adversarial environment is the real world. I’d like to see more research on how we augment and not automate. What will be the impact on the wage distribution? Should people’s wages be reduced if they’re being helped by AI? There are important economic questions.
OK, we have to discuss the notion that generative AI is an existential threat to humanity, as some have warned. It’s worth mentioning here that there’s a difference between generative AI—what chatbots do—and artificial general intelligence, or AGI, the idea that software can be sentient and act on its own, like HAL in the movie 2001: A Space Odyssey.
Gil: I’m very opposed to that language of existential threats because it distorts things in a significant way. First of all, it freaks out our fellow citizens. To some degree, some of the people who espouse that language are behind the scenes aiming at regulatory capture.
Solaiman: A fun fact about me that’s not very public is that I worked on AGI for a while. When I was working on that, a lot of what I was thinking about through my research was, if we’re building these incredibly powerful systems, whose values do they represent? My primary motivator now is to make AI systems work better for underrepresented people in the technical world. A lot of the harms to marginalized people truly are existential to those communities.
But we’re not going to be serving robot masters soon, right?
Moerdler: The more immediate issue is how the AI is used and misused, not whether the AI itself is going to decide to cause damage. That’s the crux of the issue. Worry about how it’s going to be used or misused, because it’s a long time horizon before you have to worry about AI making decisions. People are trying, as Dario said, to blow this out of proportion for other purposes.
Let’s take a few minutes to talk about AI stocks. Brook, when we last talked a few months ago, you walked me through a bunch of non-obvious ideas for AI investments. Are you still finding attractive things to buy, despite a big rally in the stocks?
Dane: First, as I’ve said, it’s very early. We’re in the emergence of this technology right now. The landscape is going to change dramatically over the next one, three, five years. Investors have to pay attention to how these things are changing and where opportunities emerge. The second thing is that, in general, there’s going to be considerable differentiation between winners and losers. Right now, the obvious plays are the ones getting revenue today, the picks-and-shovels players, semiconductor components, and networking, and then the big cloud vendors.
We’re at a funny moment, though, where the market has realised that there is going to be a boom in applications, and that there will be a bunch of infrastructure software that gets pulled along with this. There are exciting opportunities, but that isn’t going to move numbers for calendar-year 2023. So, as long as your investment horizon is long enough, you’re likely to see the payoff from this. If you’re trying to manage a portfolio from now to the end of the calendar year, the companies that are seeing the benefit are the very obvious choices that have already moved, like Nvidia and Microsoft and Alphabet.
When Microsoft reported June-quarter earnings a few weeks ago, the market’s reaction was a little tepid. The results didn’t really reflect all of the things they have been saying are coming on the AI front.
Dane: As we’ve moved through this latest earnings period, you saw a lot of companies produce results that have been ahead of expectations or right in line with expectations. Nobody has particularly gotten aggressive about raising guidance, and stocks have sold off into that, because they had large moves into the end of the quarter through June and July. People were expecting some excitement. The excitement is coming in a lot of these names, but just not in the next 90 days.
Microsoft seems incredibly well positioned from our perspective, given what the company is doing with Copilot and Azure. For us, that seems like a compelling opportunity.
Give us a couple of other picks.
Dane: I’m bullish on Marvell Technology [MRVL], which makes chips used in data-centre networking. It will grow right alongside Nvidia. Its AI-related business is around $200 million in revenue, and should double in each of the next couple of years. The stock has moved up, but so have estimates. This is a picks-and-shovels play, where the numbers are going higher.
Another company we like is Adobe, which dominates the creative-software market. We’ve been hearing good things about the beta test for the corporate version of Firefly, Adobe’s collection of generative-AI tools. From what we hear from the sales channel, the beta version is doing exceptionally well. One of the biggest advantages Adobe offers is that it will indemnify customers against copyright-infringement claims tied to output from its text-to-image software. There’s a little bit of TBD around how big this is—we still don’t have pricing information—but this is one of those situations where the incumbent has an advantage.
And what about Nvidia?
Dane: We have owned it and continue to own it in our large-cap and tech-focused funds. But we’re always managing risk and reward with position sizing; you want to make sure you stay in balance. As the leader in graphics processors, Nvidia is in a unique position and is really benefiting from this wave. The business will do exceptionally well, but at this valuation there is a wide range of outcomes.
Mark, you wrote a piece recently that asked if we are in an AI bubble. Are we?
Moerdler: We’ve been in an expectations, or optimism, bubble. The investor community has gotten enthusiastic about the near-term revenue that the technology is going to generate. This technology exploded on the market; investors looked at it and decided it would generate meaningful revenue in a relatively short period, and expectations and valuations moved up accordingly. Many management teams started talking about their AI solutions. You could literally watch stock valuations move up the more they talked about AI, even though they weren’t giving any guidance about when and how much. Multiples have moved up to relatively high ranges, approaching recent peaks, without that line of sight to revenue generation.
And so from that perspective, there is a bit of a bubble going on. It’s going to take longer than many people believe for AI to drive meaningful revenue. That doesn’t mean no revenue, but enough to move the needle on revenue growth or earnings. In most cases, revenue is going to lead earnings here, because there’s a lot of investment required to offer AI tools. You’re using them in the cloud, and you’re paying for that usage even if you own the models yourself. You’re probably paying a premium right now, because of the GPU [graphics processing unit] shortage. And so, yes, we got a bit out ahead of our skis.
I also don’t think the rising tide will lift all boats equally. It’s going to come down to the companies that can create differentiated capabilities, protect them against competitive threats, and monetise them. A lot of companies are going to add AI capabilities, and at least in the near term, that will simply be a cost of doing business. It won’t be monetisable, because their competitors are going to add similar capabilities.
As Brook discussed, you need to think about the time horizon. We think of three buckets. There are the companies where you can see differentiation in what they’re offering now. There are companies that are adding AI, but it may just mean a higher cost of doing business, at least for the near term. Longer term, years from now, it could become real. And then there are the companies that will be disrupted. Most companies are in that middle bucket today.
Which companies would be in the first bucket? And the last?
Moerdler: Two of the companies that I put in the winners bucket were just mentioned by Brook—Microsoft and Adobe. In the losers bucket I put companies offering no-code and low-code software solutions; they are going to face new competition from AI-written code. For the losers, we see a combination of increased customer attrition and pricing pressure. Almost everything else falls into the middle bucket. For most companies, generative AI won’t be a major differentiator but will be necessary for competitive positioning. Most of these companies are jumping on the AI bandwagon, and while they should be able to get functionality to market quickly, it won’t be differentiated or, in many cases, especially valuable to customers.
Dane: One thing to note: The opportunities in tech companies are compelling right now, with AI as an option in front of them. Business fundamentals are largely stable. The economy is in better shape than we all thought it would be six or nine months ago. These companies have largely pivoted to driving cash flow and operating income instead of chasing growth for growth’s sake. And then you have this optionality around AI.
Moerdler: Agreed. If your focus is on the value of the business, with the upside from AI on top, you’re going to get a better risk-reward profile than if you jump on the all-about-AI ship, because it may just take longer for that revenue to come to fruition.
While tech stocks have had a big year, and everyone is talking about AI, there haven’t been any AI initial public offerings, or really any IPOs in tech. Cathy, what does that say about where we are in the development of the AI sector?
Gao: When the general IPO market will unfreeze for tech is the million-dollar question. I have no idea. In any case, it’s going to take a while before we see pure-play AI companies come public. The speed of adoption we’re seeing in this AI cycle has outstripped anything I’ve seen in prior platform shifts. But maybe there’s something we can learn from the internet revolution that could be applied to the current era. In the internet era, the first wave of companies that came out weren’t the ones that ultimately succeeded. It was the second and third waves that watched their predecessors, learned from their mistakes, refined and honed their approach, and went out. My gut tells me this is going to take a while.
Everyone, thanks for a fascinating conversation.
Couples find that lab-grown diamonds make it cheaper to get engaged or upgrade to a bigger ring. But there are rocky moments.
Wedding planner Sterling Boulet has some advice for brides-to-be regarding lab-grown diamonds, which cost a fraction of the natural ones.
“If you’re trying to get your man to propose, they’ll propose faster if you offer this as an option,” says Boulet, of Raleigh, N.C. Recently, she adds, a friend’s fiancé “thanked me the next three times I saw him” for telling him about the cheaper lab-made option.
Man-made diamonds are catching on, despite some lingering stigma. This year was the first time that sales of lab-made and natural mined loose diamonds, primarily used as centre stones in engagement rings, were split evenly, according to data from Tenoris, a jewellery and diamond trend-analytics company.
The rise of lab-made stones, however, is bringing up quirks alongside the perks. Now that blingier engagement rings—above two or three carats—are more affordable, more people are dealing with the peculiarities of wearing rather large rocks.
Esther Hare, a 5-foot-11-inch former triathlete, sought out a 4.5-carat lab-made oval-shaped diamond to fit her larger hands as a part of her vow renewal in Hawaii last year. It was a far cry from the half-carat ring her husband proposed with more than 25 years ago and the 1.5-carat upgrade they purchased 10 years ago. Hare, 50, who lives in San Jose, Calif., and works in high tech, chose a $40,000 lab-made diamond because “it’s nuts” to have to spend $100,000 on a natural stone. “It had to be big—that was my vision,” she says.
But the size of the ring has made it less practical at times. She doesn’t wear it for athletic training and swaps in her wedding band instead. And she is careful to leave it at home when traveling. “A lot of times I won’t take it on vacation because it’s just a monster,” she says.
The average retail price for a one-carat lab-made loose diamond decreased to $1,426 this year from $3,039 in 2020, according to the Tenoris data. Similar-sized loose natural diamonds cost $5,426 this year, compared with $4,943 in 2020.
Lab-made diamonds have essentially the same chemical makeup as natural ones, and look the same, unless viewed through sophisticated equipment that gauges the characteristics of emitted light.
At Ritani, an online jewellery retailer, lab-made diamond sales make up about 70% of the diamonds sold, up from roughly 30% two years ago, says Juliet Gomes, head of customer service at the company, based in White Plains, N.Y.
Ritani sometimes records videos of lab diamonds pinging when exposed to a “diamond tester,” a tool that judges authenticity, to show customers that the man-made rocks behave the same as natural ones. “We definitely have some deep conversations with them,” Gomes says.
Not all gem dealers are rolling with these stones.
Philadelphia jeweller Steven Singer only stocks the natural stuff in his store and is planning a February campaign to give about 1,000 one-carat lab-made diamonds away free to prove they are “worthless.” Anyone can sign up online and get one in the mail; even shipping is free. “I’m not selling Frankensteins that were built in a lab,” Singer says.
Some brides are turned off by the larger bling now allowed by the lower prices. When her now-husband proposed with a two-carat lab-grown engagement ring, Tiffany Buchert, 40, was excited about the prospect of marriage—but not about the size of the diamond, which she says struck her as “costume jewellery-ish.”
“I said yes in the moment, of course, I didn’t want it to be weird,” says the physician assistant from West Chester, Pa.
But within weeks, she says, she fessed up, telling her fiancé: “I think I hate this ring.”
The couple returned it and then bought a one-carat natural diamond for more than double the price.
When Boulet, the wedding planner in Raleigh, got engaged herself, she was over the moon when her fiancé proposed with a 2.3-carat lab-made diamond ring. “It’s very shiny, we were almost worried it was too shiny and was going to look fake,” she says.
It doesn’t, which presents another issue—looking like someone who really shelled out for jewellery. Boulet will occasionally volunteer that her diamond ring came from a lab.
“I don’t want people to think I’m putting on airs, or trying to be flashier than I am,” she says.
For Daniel Teoh, a 36-year-old software engineer outside of Detroit, buying a cheaper lab-made diamond for his fiancée meant extra room in his $30,000 ring budget.
Instead of a bigger ring, he got her something they could both enjoy. During a walk while on an annual ski trip to South Lake Tahoe, Calif., Teoh popped the question and handed his now-wife a handmade wooden box that included a 2.5-carat lab-made diamond ring—and a car key.
She put on the ring, celebrated with both of their sisters and a friend, who was the unofficial photographer of the happy event, and then they drove back to the house. There, she saw a 1965 Mustang GT coupe in Wimbledon white with red stripes and a bow on top.
Looking back, Teoh says, it was still the diamond that made the big first impression.
“It wasn’t until like 15 minutes later she was like ‘so, what’s with this key?’” he adds.