Big Debates: The AI Evolution
In the first of a special series, Morgan Stanley’s U.S. Thematic and Equity Strategist Michelle Weaver discusses new frontiers in artificial intelligence with Keith Weiss, Head of U.S. Software Research.
----- Transcript -----
Michelle: Welcome to Thoughts on the Market. I'm Michelle Weaver, Morgan Stanley's U.S. Thematic and Equity Strategist.
Keith: And I'm Keith Weiss, Head of U.S. Software Research.
Michelle: This is the first episode of a special series we’re calling “Big Debates” – where we dig deeper into some of the many hot topics of conversation going on right now. Ideas that will shape global markets in 2025. First up in the series: Artificial Intelligence.
It's Friday, January 10th at 10am in New York.
When we look back at 2024, there were three major themes that Morgan Stanley Research followed. And AI and tech diffusion were among them. Throughout last year the market was largely focused on AI enablers – we’re talking semiconductors, data centers, and power companies. The companies that are really building out the infrastructure of AI.
Now though, as we’re looking ahead, that story is starting to change.
Keith, you cover enterprise software. Within your space, how will the AI story morph in 2025?
Keith: I do think 2025 is going to be an exciting year for software, [be]cause a lot of these fundamental capabilities that have come out of training these models – of putting a lot of compute into the Large Language Models – are now being built into software functionality. And that software functionality has been in the market long enough that investors can expect to see more of it come into results; that the product is there for people to actually buy on a go-forward basis.
One of the avenues of that product that we're most excited about heading into 2025 is what we're calling agentic computing, where we're moving beyond chatbots to a more automated, proactive type of interface into that software functionality – one that can handle more complex problems, handle them more accurately, and really make use of that generative AI capability in a corporate or enterprise software setting.
Michelle: Could you give us an example of what agentic AI is and how might an end user interact with it?
Keith: Sure. So, you and I have been interacting with chatbots a lot to gain access to this generative AI functionality. And if you think about the way you interact with that chatbot, right, you have a prompt, you have a question. You have to come up with the question. The chatbot is going to take that question, try to contextually understand the nature of that question, and, to the best of its ability, give you back an answer.
In agentic computing, what you're looking for is to add more agency into that chatbot; meaning that it can reason more over the overall question. It's not just one model that it's going to be using to compose the answer. And it's not just the composition of an answer where the functionality of that chatbot is going to end. There's actually an ability to execute what that answer is. So, it can handle more complex problems.
And it could actually automate the execution of the answer to those problems.
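To make that distinction concrete, here is a minimal, hypothetical Python sketch of the two interaction patterns described above – a single-shot chatbot versus an agent that plans, answers, and then executes. Every function name here (call_model, chatbot, agent) is an illustrative stub invented for this sketch, not any vendor's actual API.

```python
# A minimal, hypothetical sketch of the chatbot vs. agentic patterns.
# Every function is an illustrative stub, not a real vendor API.

def call_model(prompt: str) -> str:
    """Stand-in for a single LLM call; a real system would hit an API here."""
    return f"(model answer to: {prompt!r})"

def chatbot(question: str) -> str:
    # Classic chatbot: one prompt in, one composed answer out.
    # The interaction ends once the text is returned.
    return call_model(question)

def agent(goal: str) -> str:
    # Agentic pattern: reason over the goal, break it into steps,
    # and carry out each step rather than just composing text.
    plan = call_model(f"Break this goal into steps: {goal}")
    outcomes = []
    for step in plan.splitlines():
        result = call_model(f"Carry out this step: {step}")
        outcomes.append(result)  # a real agent would call a tool or API here
    return call_model(f"Summarize what was done for {goal!r}: {outcomes}")

print(chatbot("What was our churn rate last quarter?"))
print(agent("Find at-risk customers and draft outreach emails"))
```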
Michelle: It sounds like this tech is going to have a massive impact on the workplace. Have you estimated what this could do to productivity?
Keith: Yeah, this really aligns to the work that we did back in 2023, where we built our AI index, right. We came to the conclusion that, given the current capabilities of Large Language Models, 25 per cent of U.S. occupations are going to be impacted by these technologies. As the capabilities evolve, we think that could go as high as 45 per cent of U.S. labor touched by these productivity-enhancing technologies – or, sort of, replaced by them. That equates, at the high end, to $4 trillion of labor that's being augmented or replaced on a go-forward basis. The productivity gains are still yet to be seen – how much of a gain you could see on average. But the numbers are massive, right, in terms of the potential, because it touches so much labor.
Michelle: And finally on agentic, is the market missing anything and how does your view differ from the consensus?
Keith: I think part of what the market is missing is that these agentic computing frameworks are not just one model, right? There's typically a reasoning engine of some sort that's organizing multiple models, multiple components of the system. That enables you to, one, handle more complex queries – more complex problems to be solved – and lets you actually execute on the answer. So, there are execution capabilities that come along with that. And, equally as important, it puts more error correction into the system as well. So, you could have agents that are actually ensuring you have a higher accuracy of the answer.
It's the sugar that's going to make the medicine go down, if you will. It's going to make it a lot easier to adopt in enterprise environments. I think that's why we're a little bit more optimistic about the pace of adoption and the adoption curves we could see with agentic computing, despite the fact that it's a relatively early-stage technology.
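As a rough illustration of the framework Keith outlines – a reasoning engine coordinating multiple models, with a separate checking step for error correction before anything is executed – here is a hypothetical Python sketch. The planner, worker, checker, and execute functions are stand-ins invented for this example, not a description of any specific product.

```python
# Hypothetical sketch of an agentic framework: a reasoning engine routes
# work to specialist models, and a checker agent error-corrects each draft
# before it is executed. All functions are stubs for this illustration.

def planner(query: str) -> list[str]:
    # Stub for the reasoning engine's decomposition step.
    return [f"subtask 1 of: {query}", f"subtask 2 of: {query}"]

def worker(subtask: str) -> str:
    # Stub for a specialist model that drafts an answer to one subtask.
    return f"draft answer for {subtask}"

def checker(draft: str) -> bool:
    # Stub for the error-correction agent that verifies the draft.
    return draft.startswith("draft answer")

def execute(action: str) -> None:
    # Stub for the execution step (an API call, a ticket, an email, ...).
    print(f"executing: {action}")

def reasoning_engine(query: str, max_retries: int = 2) -> None:
    # Coordinate the components: plan, draft, verify, then execute.
    for subtask in planner(query):
        for _ in range(max_retries):
            draft = worker(subtask)
            if checker(draft):  # only execute once the check passes
                execute(draft)
                break

reasoning_engine("resolve an open billing dispute")
```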
Michelle: You just mentioned Large Language Models, or LLMs; and one barrier there has been training these models. It requires a ton of computing power, among other constraints. How are companies addressing this, and what's in the cards for next year?
Keith: So, the demand for that compute, in our mind, comes from two fundamental sources. And as a software analyst, I break this down into research versus development, right? Research is the investment that you make to find core fundamental capabilities.
Development is when you take those capabilities and make the investment to create product out of it. Thus far, again, the primary focus has been on the training side of the equation.
I think that part of the equation looks to be asymptotic to a certain extent. What people call the scaling laws – the amount of incremental capability that you're getting from putting more compute into the equation – is starting to come down.
What people are overlooking is the amount of improvement that you could see from the development side of the equation. So, whereas the demand for GPUs, the demand for data center for that pure training side of the equation might start to slow down a little bit, I think what we're going to see expand greatly is the demand for inference, the demand to utilize these models more fully to solve real business problems.
In terms of where we're going to source this: there are constraints in terms of data center capacity. The companies that we cover have been thinking about these problems for the past decade, right? And they have these decade-long planning cycles. They have good visibility in terms of being able to meet that demand in the immediate future. But these questions of how we're going to power these data centers are definitely top of mind for our companies, and they're looking for new sources of power and trying to get more creative there.
The pace with which data centers can be built out is a fundamental constraint on how quickly this demand can be realized. So, I don't think those supply constraints are going to be an immediate limiter for any of our names when we're thinking about calendar [20]25. But it's definitely part of the planning process and part of the longer-term forecasting for all of these companies – where are they going to find all this fundamental resource? Because whether it's training or inference, a lot of GPUs are still going to be needed. A lot of compute is going to be needed.
Michelle: Recently we've been hearing about so called artificial general intelligence or AGI. What is it? And do you think we're going to see it in 2025?
Keith: Yeah, so, AGI is the – it's basically the holy grail of all of these development efforts. Can we come up with models that can reason in the human world as well as we can, right? That can understand the inputs that we give it, understand the domains that we're trying to operate in as well or better than we can, so it can solve problems as effectively and as efficiently as we can.
The easiest way to solve that systems integration problem – how can we get the software, the computers, to interact with the world the way that we do, and have all the impact that we do – is for it to replicate all those functionalities. For it to be able to reason over unstructured text the same way we do. To take in visual stimuli the same way that we do. And then we don't have to take data and put it into a format that's readable by the system anymore.
2025 is probably too early to be thinking about AGI, to be honest. Most technologists think that there are more breakthroughs needed before the algorithms are going to be that good; before the models are going to be that good.
Very few people think Large Language Models, and the scaling of Large Language Models in themselves, are going to get us to that AGI. You're probably talking 10 to 20 years before we truly see AGI emerge. So, 2025 is probably a little bit too early.
Michelle: Well, great, Keith. Thank you for taking the time to talk and helping us kick off Big Debates. It looks like in 2025 we'll see some major developments in AI.
And to our listeners, thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen to the show and share the podcast with a friend or colleague today.