AI at work: Inside the new era of hedge fund research and trading

Episode 2 | March 25, 2026 | 00:48:24
Hedge Fund Huddle



Show Notes

How are today’s hedge funds really using artificial intelligence — beyond the hype? In this episode, we sit down with Andrew Delaney, President of A‑Team Group, and Nishant Gurnani, Partner and Quant Researcher at Versor Investments, for a practical look at how AI is transforming research, alternative data, strategy design, and risk management. From treating AI agents as “junior analysts” to building proprietary model stacks and navigating crowded trades, our guests unpack how technology is reshaping modern investment workflows and the competitive edge in markets today.



Episode Transcript

Jamie: Hello everyone, and welcome to another episode of Hedge Fund Huddle with me, your loyal host Jamie McDonald. And today we are talking about the tiny topic of AI. Now, more specifically, we're talking about the practical uses of AI and how people are using it in terms of investing and trading today. And luckily, I'm joined by two experts to help me out. They are Andrew Delaney, who is President of A-Team Group, and Nishant Gurnani, who's a partner at Versor Investments. Gentlemen, welcome to the show.

Andrew: Thanks for having us, Jamie.

Nishant: Yeah, really great to be here.

Jamie: Good. Well, guys, I like to start by just getting a bit of background from our guests about how they started their careers and how they got to where they are today. So, Andrew, why don't we start with you?

Andrew: Sure thing. Again, thanks for the opportunity to share everything here. A-Team Group is an online publisher, and I'm a journalist by trade. What A-Team does is focus on the business of data and technology in global financial markets. We've got four main areas, one of which is trading tech, and we cover a lot of the use of AI in trading and investment workflows in that segment. And for each segment we have, we offer news, analysis, conferences, webinars and so on and so forth. So lots and lots of free content, and I look after most of that. But as I say, I'm a journalist by trade, and I started my career with data very much at the forefront of what I was doing, straight out of college. I got a job, luckily, as a news assistant at the Wall Street Journal. And as part of that job, the deal was basically: we'll teach you how to be a journalist, but you've got to deal with data in the back of the book. So my job was basically to take what was then known as a Telerate terminal, a little video screen that sat on my desk with a little keypad.
And every evening just before midnight, which is when we put the newspaper to bed, I would punch out the government bond prices from Cantor Fitzgerald on a little printer. And then I'd turn around to another screen on my desk and punch those numbers into the galley at the back of the book. It was a fairly menial task and probably not as glamorous as I'd like it to be, but it taught me two really important things about data. The first one was the importance of exclusive or difficult-to-find information, analysis and insights. This was the European edition of the Wall Street Journal, and many of the people who bought the newspaper bought it for those bond prices, because you couldn't get them anywhere else in Europe. This was pre-internet, pre just about anything else electronic. Unless you had a Telerate terminal, you couldn't get those bond prices. So exclusive data: very important. The second lesson I learned was the importance of integration. Although the Wall Street Journal and Telerate were both owned at the time by Dow Jones, I was the integration layer. There was no integration between the two systems we were using. So I just used to punch in seven, six, seven, six, seven, six, seven, seven, get those pages, print them out, and tap those bond prices back into the galleys for the paper.

Andrew: That was the level of integration we had at the time. And so that's how I got my start in this career. Moving forward a bit, I ended up in New York for 20 years. Initially, I was the launch editor of a publication called Inside Market Data, which became sort of the Bible of market data. Then in 2001, we launched A-Team, and A-Team, as I mentioned, does lots about data: market data, reference data, etc. But to bring us full circle to this podcast, last year we acquired the alternative data conference business of Eagle Alpha.
We are now running the Eagle Alpha events, and as part of that, the connection between alternative data and AI became a very important part of what we're doing. A lot of alternative data services are unstructured, and we found that as AI became more accepted, AI models could be used to add structure to alternative data services, making them more valuable. So that's become a very important part of what we do. A little bit of a roundabout way of getting here, but that's how I ended up on this podcast.

Jamie: Thank you, Andrew. And a good little history lesson on how far we've come in our lifetime, from punching bond numbers into newspapers. Nishant, the same question to you before we talk specifically about Versor. Perhaps you can give a little bit of background about your career and how you got here.

Nishant: Sure. So I'm Nishant Gurnani. I lead the futures and FX strategies at Versor, which is a large quantitative, systematic investor. My background was pretty traditional for a quant: I was a geeky kid who loved math and science and computers growing up. I studied math in college and statistics in graduate school, and in college I was very fortunate to be able to spend two summers at some very well known quant shops. I spent a summer at AQR, and then I spent a summer at what was then called SAC Multi-Quant, but is now called Cubist. In terms of my background on data and AI specifically, I took a class a long time ago as an undergrad that sort of changed my life. It was a year-long sequence on AI and machine learning taught by two luminaries in the field, and it was very clear to me that this was going to be the cutting edge of what needed to be done. And the progress we've made since then has been pretty incredible.
Finance-specific: one of the jokes that my friends used to crack in college was that normal people would list other careers they might pursue, like doctor or lawyer, and I would only list finance careers. So I've always been interested in finance. And before Versor, I spent a little bit of time out in San Francisco working for a fintech, again focussed heavily on alternative data sets, specifically with regards to underwriting.

Jamie: Nishant, given that introduction, did you ever just start trading stocks or indices yourself? Did you ever think, I could try and do this myself?

Nishant: I definitely did stocks as a teenager, and definitely not indices. I don't think I was that mature in my development just yet, but definitely stocks, and definitely some other harebrained ideas of things that I potentially could have traded.

Jamie: Okay. Well, let's get into today's topic. We're talking about the practical side of AI rather than anything more theoretical. What tools are being used today by hedge fund managers and personal traders, to either filter ideas, monitor ideas, or help them with risk profiles? Andrew, perhaps we'll start with you. Going back five years, we were really talking, I guess, specifically about generative AI and large language models helping people to sift and filter better ideas. But what's happened over the past, let's say, 3 to 5 years to get us to where we are today in terms of how AI is being used?

Andrew: Sure thing. Well, we've been following AI for a bit longer than that. We saw things like machine learning and deep learning coming through, I'd say, probably 6 or 7 years ago. And then of course we had the launch of ChatGPT, the birth of generative AI. I would say since then we've been following that pretty closely. We've conducted probably 6 or 7 market surveys over the past couple of years looking at how people are using it.
We run a number of advisory boards where we take practitioners in our marketplace to lunch and pick their brains over something nice to eat. And from all those activities, we've really whittled things down to three types of use cases that we are seeing in the marketplace: efficiency, growth and control. In terms of efficiency use cases, these are things like summarising meetings and actions, using AI to code and test code more efficiently, and extracting data from unstructured documents and alternative data sources. In terms of the growth-type use cases, we're seeing people using AI for asset allocation and investment modelling; they're using AI tools to drive client retention and to identify cross-selling opportunities. And then finally, on the control side, we're looking at things like risk modelling, stress testing, scenario planning, some credit and market risk assessment, regulation interpretation and, to a little extent, digitising of contracts. So they're the kinds of things we're seeing. In terms of models being used, obviously all the household names as they are now, from Copilot to, increasingly, Claude and so on and so forth. But I think the real action in the hedge fund area is around developing your own AI stacks, your own large language models, and even more specialised models to add that secret sauce. So that's the development that we've seen over the past two years, I would say.

Jamie: Well, great, Andrew. And Nishant, over to you. Perhaps you could start by talking a little bit about Versor, which strikes me as a very AI-driven boutique hedge fund. You can talk about which strategies you employ, and then once you've spoken about Versor, perhaps you could go into a little detail on how you're using AI specifically today.

Nishant: Sure.
So Versor is a systematic, quantitative investment firm based in New York, and our focus is exclusively on absolute return strategies. We are purposely designed as a systematic, research-driven boutique, and alternative data and AI have been our pillars from very early on. We started working with alternative data very early on, and one of the things that we're going to get into is that there is no extracting insights from alternative data without AI techniques. You can't construct sentiment scores from text unless you use natural language processing methodology. So this has been a core part of what we're focussed on. We have three main strategies that we run: a systematic equity strategy, an event-driven strategy that focuses on merger arbitrage, and a managed futures strategy, which I'm personally in charge of. And when it comes to our philosophy on alternative data and AI, this cascades firmwide. So I want to talk about some of the specific examples of how we use it on a day-to-day basis, some of which Andrew alluded to. Our philosophy is that, fundamentally, we are systematic investors, and our job is to speed up the velocity with which we are able to make good investment decisions. That framework generalises, so it is not specific to quants or discretionary.

Nishant: Our goal is to get good, actionable investment ideas, and AI is used on a day-to-day basis in helping us do that. First and foremost, in speeding up research, asking more detailed questions, and helping us organise our day-to-day management: we use a tool called Motion.ai that dynamically adjusts tasks and projects based on priority. So what we see throughout our investment process is these efficiency gains.
Even though they may not be super glamorous on an individual basis, they compound, so that we're able to do things we were not able to do before. In terms of research specifically, there are really two big ways that we see AI impacting us. One is that the speed at which we're able to evaluate research ideas has increased significantly. So if I have a question like, I want to know the number of dissenting votes in the FOMC meetings going back 20 years, that's a question I could have answered before AI. But the speed at which I can answer it right now, using either off-the-shelf large language models or models that we fine-tuned ourselves, has gone up significantly. The second thing that we're able to do is tackle a complexity of research ideas that we were unable to previously. So as a very concrete example, consider an idea that you have.

Nishant: And the idea is: currently there are a lot of high quality podcasts that upload on a daily basis, where investors come on and give a lot of interesting colour on market sentiment and their views. And so maybe your investment thesis is, I want to get some sort of consensus understanding of what people are saying. Fifteen years ago this was very difficult, if not impossible. In 2012 we saw the big vision paper, AlexNet, come out. In 2018 we saw Google launch the BERT paper. In 2017 I was at a conference where the "Attention Is All You Need" transformers paper was actually announced, and even then this idea of summarising and synthesising a vast amount of market sentiment from podcasts would have been impossible. Today, using Claude, using the latest large language models, I could do it in a weekend. Not even a weekend: in a few hours I could make a very large amount of progress there. So I think the question from the investment process perspective has really become not, are you using AI? I think all knowledge work in general requires using AI meaningfully in order to speed up efficiency.
But the question is really around how are you using it? Where is it providing the most value, and how is it fitting into your investment process in general?

Jamie: I have two quick follow-up questions on that. Firstly, one thing that struck me as you were talking, and this is going back to when I was running a book myself: conviction was kind of everything. When it came to an investment idea, I needed to personally have conviction in an idea to know when to add or when to take positions off. And part of getting that conviction is the friction that you feel putting work into an idea. It was reading the 10-Ks, it was meeting with management, and it could only really come from within. Now, if you're relying on AI, obviously your conviction can only be as high as the conviction you have in the AI platform. So question number one is, if conviction is still a big part of trading, how do you get conviction in AI? And part two of that is, as Andrew was alluding to, there are a lot of platforms out there. You mentioned Claude and Motion.ai and Gemini. Do you rely on third-party AI platforms, or have you started to build your own, and which do you see as more useful to you?

Nishant: Sure. So with regards to conviction, the way we think about it is that the AI agents, and we've been spending a lot of time on building out our agentic capabilities, should be thought of as junior researchers, and their goal is to help you get to decisions faster. But the conviction still has to come from you. So let me give some concrete examples. In the past, if you were a junior researcher who joined our research team, one of the projects that you might be asked to do is this: I, as the lead, have a specific academic article that I've read, maybe in the Journal of Financial Economics.
I think the idea is interesting, and your job would be to read the article, implement the idea, test it using our internal evaluation framework, and make an argument in a research case for why that signal is predictive and should be added to the strategy. What we can do with AI tools is systematise this process. So not only do we have researchers doing this work, but we can have AI agents automatically read papers and suggest ideas, which then go through the natural research process where the PM or strategy lead still has to evaluate them on a rigorous basis. The thing with using AI tools in general is that evals are really important. There are constantly new models coming out. Two weeks ago, we saw Opus 4.6 on Claude; OpenAI launched Codex 5.3. We're constantly seeing these new things come along, and so the conviction and confidence in these models is a function of a structured evals process.

Nishant: So leading it back to what you said, Jamie: in your context, you tracked insurance stocks, if I remember correctly, and you would have a historical data set that you've worked with, where you sat and did the work and put in the effort, and every time a new model came out, you would get to score it on that. And depending on the quality of the score, that's the conviction level you have in that particular agent using the tool. So a really fun example for everybody who's listening to the podcast: go into your favourite LLM and try this very simple evaluation. Just ask, "My car needs to be cleaned. The car wash is 50m away. Do I walk or drive?" If you try something really simple like this, very simple reasoning, you're going to be shocked by some of the answers that you get. And if you go back through older models as you click through, you'll see how it's getting better.
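[Editor's note] The structured evals process described here can be sketched in a few lines of Python. This is a toy illustration of the pattern, not Versor's actual tooling: you keep a fixed, human-labelled question set built from work you have already done by hand, and every new model release is scored against it before it earns any trust. The `toy_model` function stands in for a real LLM API call, the questions and labels are invented, and the containment check is deliberately crude.

```python
def score_model(model_answer, eval_set):
    """Score a model against a historical, human-labelled question set.

    model_answer: a callable wrapping whatever LLM you use.
    eval_set: list of (question, expected_answer) pairs you trust
    because you did the work yourself.
    Returns the fraction of answers containing the expected answer.
    """
    hits = 0
    for question, expected in eval_set:
        answer = model_answer(question)
        if expected.lower() in answer.lower():  # crude containment check
            hits += 1
    return hits / len(eval_set)


# The same eval set is re-run every time a new model version ships, so the
# score, not the release notes, sets your conviction level.
eval_set = [
    ("My car needs to be cleaned. The car wash is 50m away. Do I walk or drive?", "drive"),
    ("Is 17 a prime number?", "yes"),
]


def toy_model(question):
    # Stand-in for a real LLM client; it answers every question the same way.
    return "You should drive, since the car itself has to be at the car wash."


print(score_model(toy_model, eval_set))  # prints 0.5
```

A model that gets the car-wash question right but misses the others scores lower than the incumbent, and that score, tracked across releases, is the "conviction level" in the agent.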
So conviction is still very human-driven, but getting the idea to a point where the human can start working with it and think of it as an actionable idea, that's the velocity I'm talking about. I have a random idea, I read this paper, and I don't necessarily have the time to spend a week looking super deep into it, because maybe my initial conviction on the idea is low. I can have Claude running in the background, and I explicitly do.

Nishant: Right now I have Claude running on a couple of different problems that I'm looking at, and it will give me back enough structured output that, as a human, I can then say: actually, this is really interesting, let me go and pursue this further, this has legs to it. Or: actually, thanks, Claude, you did the work, and I've convinced myself this idea needs to be thrown away. So that's on the conviction side. On your second question about internal versus external, which Andrew also alluded to, I think you're going to see an evolution of both use cases. If we take a step back, when we talk about AI we are really talking about large language models specifically, because that is one of the key tools that is helpful in the investment process. In order to train a large language model, there are two steps. There is a pre-training step, where you run the model on a large corpus of text across a large amount of compute, and it learns some generalised knowledge. And then there's a second step, the tuning step, where you decide, this is my specific use case, and you feed it examples to help it understand. So the way that I suspect investment firms are going to do this, and the way that we started thinking about this problem, is that you have to design your systems so that you use whatever model is best. That may be something that's off the shelf.

Nishant: It may be something that's off the shelf that you then fine-tune for your specific use case.
It may be an open-source model that you downloaded off Hugging Face, like DeepSeek, and then decided to tune with a bunch of internal evals for, say, FOMC statement analysis or unstructured data parsing, whatever that use case may be. And then the third option, which I think is actually the toughest, and which we won't see firms do, and I have my reasons why, is training very large end-to-end models entirely from scratch themselves. I think the whole point of LLMs and fine-tuning has been that you can take something that another person has trained on a generalised corpus and then make it smarter for your use case. And that's the real moat. Because the truth is, most investment firms, including the largest firms, do not have the level of compute that the Googles and the Microsofts and the OpenAIs and Anthropics have in order to do it. And frankly, they are solving different problems. A model like Gemini is supposed to be generalised, to generate text, video, transcripts, whereas investment folks are really focussed on the investment problem. And so the highest-leverage thing is to take something that exists, adapt it to your specific use case, and improve it. And that's where we're going to see these proprietary differences: firms that have spent a lot of systematic time on improving their internal models will diverge in the skill with which they're able to deploy them.

Jamie: Perhaps, Andrew, you can comment a little bit on what Nishant just said, and on what else you're seeing in the market in terms of using your own platforms versus external ones. And then also, Andrew, Nishant said something there, which was to think of these platforms as a bit of a junior analyst. There'll be people listening wondering how many jobs will be open at the junior level in years to come, and I wonder if you could maybe comment on that a bit.

Andrew: Sure thing.
So I totally agree. The world has been looking at how to onboard AI, if you like. The firms that are building their tech, their AI stacks, are putting in place governance rules, policies and so on and so forth to really put their best foot forward and have the right tool for the right job. And I think that is a process that's ongoing; I don't think anyone's got it down just yet. We're seeing a lot of appointments of Chief Data and AI Officers now. People are incorporating that kind of discipline into their adoption of AI, making sure that the people within the organisation know which tools should be used for which processes and which tasks. So I think it will be a mix of internal and external. As Nishant said, I don't think it'll be a massive build from scratch. But the nuance, the uniqueness, will come from the mix: the mix of what you've got internally and, of course, the data you've got access to, which I know we're going to talk about in a little bit. So I think that is the path we're on in terms of using agents to do various tasks. I mean, we are seeing that in real life. We've just completed a survey, a bit wider perhaps, but certainly covering the investment bank and investment management side of the world.

Andrew: And we're seeing large organisations put in place teams of AI agents to perform tasks. We're seeing evaluation of these agents as if they are employees. They get ranked, they get evaluated, they get trained, they get told off and told to go and perform better if they don't meet certain requirements. And ultimately they get terminated if they don't work. So you're seeing a whole corporate structure around these models, or these agents, I should say, starting to emerge.
I think, in terms of the light at the end of the tunnel, if you like, or the silver lining for perhaps junior staff, it's that collecting the data that needs to be used to train models, and indeed to pull into these models, finding unique data sets, is still very much a mix of manual and automated work. We see a lot of human-in-the-loop for this kind of stuff, and it gets back to trust in the data: making sure that people do feel they're getting the right data to train these models on. I think that's an imperative that will continue.

Jamie: Andrew, just then you mentioned the constant striving for unique data sets. Even when I was running a book ten years ago, I was always so worried about crowded trades, and I can't help feeling that these platforms are going to continue to create crowded trades. So perhaps, Nishant, you can talk a little about that. I mean, how do you make sure that the prompts you are using for idea generation are not similar to the Citadels and the other big companies out there? And maybe going a stage further, let's take a black swan type of event, like tariffs last year. How does Versor perform in that kind of environment, and how do you protect yourself?

Nishant: Sure. Look, that's an entirely fair question, and I think it goes back to investment edge. So as AI becomes more accessible, and it is accessible, I don't want to make it seem like that's a future statement, it is accessible right now and it is pretty easy to get started using it on a day-to-day basis, the edge doesn't come from having more models or more compute or more data. It really comes from how you are using those tools in a meaningful way. So when we think about how we're using these things, our advantage really comes from our investment process, which we believe is differentiated and thinks about markets in a very specific way.
Our usage of alternative data for each one of the strategies is defined in a very specific, unique way that we don't believe other people are doing. So let me talk specifically about our flagship managed futures strategy, which I work on. One of the things that we do there is we have this view that you need to look at equity index futures from two perspectives: a top-down macro perspective, but also a bottom-up stock-level perspective. And so we have alternative data that we collect on 24 equity markets globally. That's 10,000 stocks. We aggregate those stocks individually and at the country level, and we construct signals doing that. We believe that's not a common approach, and there's a lot of skill and nuance that goes into applying those things and thinking about that problem in general.

Nishant: And there is a research-focussed idea that results from that. To give a quick example from the general AI world, one of the things that has come up previously, but maybe not been discussed in detail: if we roll back the history a little and just look at the timeline of the development of transformers, in 2017 this really important paper comes out called "Attention Is All You Need". That introduces the attention mechanism, which is the heart of the transformer. It comes out of Google, and a year later Google actually releases its transformer model, which is called BERT, as in Bert from Sesame Street; the T is for transformer. Yet it was the OpenAI GPT series that ended up winning. Why is that? The reason is, one, their focus was very different. The Google models were very focussed on the Google problem of search and understanding, and so what BERT was really good at was understanding text. It was a really good reader of text.
OpenAI took a very different approach, where they really focussed on the generative piece: thinking about what it looks and feels like to generate text, thinking about what is the likely next thing somebody is trying to say, and generating text accordingly.

Nishant: And it turns out that that approach was the one that ended up scaling better and led to the GPT advances. So there were two teams. Google was significantly better resourced and had significantly more researchers. OpenAI, in 2018, I used to go to their offices because they were pretty close to where I used to work at Brex; there were about 80-odd people there working on the early GPT versions. And it just turned out that that approach was the right one. So when we think about the investment process and the commoditisation of AI, and this comes down to what Andrew was saying about the uniqueness of alternative data sets too, it's about thoughtfully thinking: this is what I'm doing, and here's how it's going to lead to differentiated alpha ultimately, rather than necessarily being concerned that everybody's putting in the same inputs. Because, frankly speaking, markets are competitive. If you just do the same thing as everybody else, you're not going to make money. And so a lot of the focus is certainly on that. And when you reference time periods like black swan events, there are two specific things we think about in that sense. One is just experience. We have structured our strategies across the board to have risk as a core part of their philosophy. The founding partners have navigated multiple cycles: the dot-com bubble, the great financial crisis, Brexit.

Nishant: And so, having a good risk framework that takes a realistic view that liquidity is going to dry up and a lot of stress scenarios can happen.
And designing strategies that are going to survive those periods is important. In my particular case, for the flagship managed futures strategy, there's a focus on something we call convexity, which is the ability to do well in up and down markets. It's part and parcel of the design of the strategy itself. We've been trading on this philosophy where we look at cross-sectional differences between equity markets worldwide, so regardless of whether they're all falling or all rising, we should be able to make money. And this has worked well for us over the past eight years; the strategy has been live not just during Covid, but SVB and so on and so forth. So the goal isn't really to be immune to market shocks; that's unrealistic. The goal is to take your specialised investment process and design it so that it is resilient to different market environments. Where AI and alternative data fit in is in helping you design that process well, in a robust and unique way, so that you're not competing with others and you're actually able to make money in different markets.

Jamie: So, Nishant, sticking with you, that's really interesting. We've spoken a bit about research and investment idea generation being automated versus human-led, and the relationship there. What about execution? And again, this touches on risk. To what extent do you have AI programs in place that will, without a human being involved, change the percentage makeup of a portfolio, i.e. will trade without a human being involved? Because that seems to me like a bigger step. I was thinking earlier, it's a bit like booking an Uber where there's a human driving it versus actually getting into a Waymo where there's no human driving it. Are we at that stage yet?

Nishant: So I think it's a spectrum. Look, Jamie, even without AI, there are a lot of high-frequency trading algos in the market right now that are trading autonomously without any human intervention.
That's just the truth of where we are. When it comes to agentic systems in general, I don't think we're entirely there yet. I think the advantage of the agentic approach is that you've imbued a little bit of intelligence in all the various components. So if we talk about execution in general...

Jamie: I was being specific about discretionary trading there, sorry.

Nishant: Yes. So in discretionary trading, I don't think we are necessarily there, because again, I don't think there is that level of trust in the LLM output. But we are so close that these are the things that could happen. So let's be specific with an example. You're watching some stock; you have a large position in Apple for whatever reason. And you've built a bunch of agents that are looking out for black swan events. They are reading the news feeds, they are looking on Twitter for anything said about Apple that you've decided would be super negative. You have an alternative data source that is looking at payment volume coming in on the number of iPhones sold. Right now, we're at a place where you might have alerts: the alert goes off, then Jamie gets called, and then you do something. With agentic systems, I think we're a step further. We've given them all a little bit of intelligence, so not only will they call and say, hey, there's something wrong with this Apple position, they might have a recommendation that says, actually, you need to cut your position by half. I don't think we have reached the stage just yet where we are fully comfortable with them doing that execution, because again, there is a human component that is still driving the investment decisions on the discretionary side.

Nishant: That being said, I think we're probably less far away from that than we think. It's a level of comfort. So if you've been following the news, Meta recently bought this open-source agentic platform called Clawdbot.
And what Clawdbot is, is a personal assistant that's in your emails: it's sending emails on your behalf, it's scheduling meetings and doing things. When you start using that, there's potentially a deep discomfort. But once confidence grows, i.e. you systematically evaluate how its recommendations have behaved over time, I think you'll see people converge to a place where they're more comfortable letting it trade on their behalf. Just like you've seen people get much more comfortable using Waymos. If you've been in a Waymo, the first time you go it's very scary, but then it quickly gets very boring, because you get so used to the consistency with which it does the thing you want it to do. So I think we're on that journey; I don't think we've necessarily got there on the discretionary side.

Jamie: Yeah. Andrew, moving across to you, I wanted to ask a little about regulation and how that may play a part in all of this. We've obviously talked about AI as a good and helpful agent, but I'm sure it can be misused out there: if you're long Delta and short American Airlines, you can presumably program something to click on Delta Airlines' website as many times as possible to give the impression that everyone's starting to fly Delta, or whatever. To what extent are regulators trying to get involved, and what effect do you see that having?

Andrew: We've had regulators come and speak at our events. They tend to come and try to assure everybody they're not going to be too heavy handed around this, given our audience. They want to be seen to have a light touch. I think they're very much at the learning stage at this point. Some of them are running all manner of sandboxes and trials, where they get people to play with things that they think could help in areas like reporting.
But in terms of actually getting more proactive around the use of AI in the investment process, I think we're still waiting to see. There's been a lot of talk about the EU AI Act and so on, which is more generic in flavour, if you will. I don't think that's trickled down yet into our world. When I talk to our regulators, i.e. the financial regulators, we haven't seen them apply that as yet to our activities. So I think the jury's still out, really. Not quite there yet.

Jamie: And Nishant, I guess tangentially on that: are people like Versor having conversations with regulators? I'm sure they're out there talking to people inside the market. And then maybe a second follow-up, which is more of an investment question about AI: we're obviously in some kind of bubble, maybe, when it comes to tech and a new piece of transformative technology like AI. What are the signs that a bubble might be bursting? What do we need to look out for? Do you even believe we are in a bubble?

Nishant: Certainly. So answering your first question about regulators: we, as an SEC- and CFTC-regulated firm, are properly regulated by the authorities. As to whether they're coming and speaking to us about AI regulation specifically, not to the best of my knowledge, though that's certainly something the firm can correct me on if I'm wrong. I don't believe so. One of my thoughts on the regulation piece is that there are existing rules in place governing how algorithms are deployed in financial situations, and I think those work really well. The person who deploys the particular model is the one who is ultimately to blame. During my summer on the SEC multi-quant desk, the Knight Capital algo situation happened, and there it was very clear that the blame did not lie with the algorithm itself; the blame ultimately goes to the people who deployed it. So I think that framework exists even for AI development.
So issues with Waymo cars are immediately attributed to Google, and they are the ones that should be held responsible. That's certainly my thinking on the regulatory piece. With regards to bubble or not bubble: to be entirely fair, I am not the right person for that. I am a systematic quant investor, so for me this is a natural evolution of the process. I really just think of these things as tools that are helping me be a better quant investor, and so there's a little bit of bias in my thinking, because more data and more compute are my lingua franca on a daily basis. So I certainly don't see this necessarily being super bubbly, but I am not the right person. I am not looking at the CapEx spending; I am not looking at the mismatch between the compute that is required and the power that is required to generate that compute. I do think there is some mismatch there: the amount of compute that we are trying to build is not supported by the amount of power that we are able to generate. I think that's a key bottleneck that has been pointed out.

Jamie: And a question really for both of you. I guess there'll be quite a few people listening who are trading their own portfolios, and they want to get better at using AI tools either to come up with ideas or to help monitor them. What sort of advice can you give them on where to look to find the right platform? And I guess the second question is, I never really thought about it until just now, but how careful do people need to be with their prompt writing? Should they actually spend some time working out what they write in those prompts? It's obviously quite important. So maybe a few words on that.

Andrew: I would speak generically on the prompts side. I hear of people keeping prompt libraries, and again, part of the governance, if you will, of AI deployment is that this is the way we approach this kind of prompt.
This is the way we approach that kind of prompt, to get the best results or to safeguard against generating something questionable that won't be defensible ultimately. So I think there's some governance to be done there, and I think people are starting to do that. And then my other bit is really about data quality. There is this constant search for new data sets; I can see it, and to some degree AI encourages that and makes it feasible. But it's about ensuring you've got the right processes in place so that what you get in the end, that secret sauce, the nuance, the unique approach, is really optimised by making sure your data quality is good. Ultimately, that would be my two cents, as it were.

Nishant: Yeah, I think Andrew's 100% right. Having a prompt library is something that we highly recommend. From a quant perspective, because we treat these as models and we want as deterministic an output as possible, we need to store the prompts that we write. And over time you learn how certain prompts help the model make certain decisions. The worst thing that can happen to a retail investor trying to apply these things is that one day you put in one prompt and get recommendation A, and the next day you put in another prompt and get recommendation B. There are simple things you can do to make this process more robust. One: any large prompt you're writing that's an investment thesis should be recorded in your trading journal as a specific input in your investment process. The second thing that I highly recommend, and this is something we spent a lot of time building internally, is that if you use a particular tool to get some output that feeds into the investment process, have another LLM score it.
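As a rough illustration only (not Versor's actual system), the score-and-revise idea can be sketched in a few lines of Python. The `call_llm` function below is a hypothetical stub standing in for whichever chat-model API you use (OpenAI, Anthropic, Gemini, etc.); one model drafts a thesis, a second model critiques it, and the critique is fed back into the next draft before a human ever sees it.

```python
# Sketch of a draft -> critique -> revise loop with two different models.
# `call_llm` is a placeholder; swap in a real chat-model client of your choice.

def call_llm(model: str, prompt: str) -> str:
    """Stand-in for a real chat-model call; returns canned text here."""
    if prompt.startswith("Critique"):
        return f"[{model}] Weaknesses: thesis lacks a falsifiable exit condition."
    return f"[{model}] Draft thesis based on: {prompt[:60]}"

def critic_loop(idea: str, drafter: str, critic: str, rounds: int = 2) -> list[dict]:
    """Alternate drafting and critiquing for a few rounds, keeping a journal."""
    history = []
    draft = call_llm(drafter, idea)
    for _ in range(rounds):
        critique = call_llm(critic, f"Critique this thesis:\n{draft}")
        history.append({"draft": draft, "critique": critique})
        # Fold the critique back in as an input to the next revision.
        draft = call_llm(drafter, f"Revise the thesis given this critique:\n{critique}")
    history.append({"draft": draft, "critique": None})  # final draft for the human
    return history

journal = critic_loop("FOMC statement turned dovish; long duration?", "model-a", "model-b")
```

Storing `journal` alongside the trade is one simple way to implement the prompt-library and trading-journal habit described above.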
So think of discretionary investors. Say you're a retail investor parsing FOMC statements; you can do that pretty easily off the shelf, maybe using ChatGPT. Then have Gemini score it for you, and keep that framework consistent, so that you're having different high-calibre models evaluate each other. It keeps the output a lot more honest, and it gives you a better sense of it.

Nishant: One of the agents we're building internally that I'm very excited about is what I call the always-critic agent. We were talking about conviction, and I really am looking for an agent that, for every idea proposed to it, tries to point out issues with the idea. That's very helpful, because it's like having somebody carefully reviewing it. There's actually a really great open source tool called Robo Rev that does this continuously for code, but you can also do it for your research process: have another agent sit in, score the output of a specific agent, poke holes, and use that output as an input loop to improve the structured response. You can have this back and forth going, so that you get a couple of rounds of: here's an idea, here's where it sucks; here's an idea, here's how it's better, before it even gets to the human, who can look at it and say, interesting, maybe I can action that or not.

Jamie: That's some really excellent advice. Thank you both for that. In fact, if people are listening, they should go back and listen to those answers again; there's some excellent practical advice in there. Gentlemen, we're slightly running out of time, but considering we're talking about practical uses, I want to finish on one question for both of you. If it's not too personal, how are you both using AI in your own lives, just to make life outside of work a bit easier?
With me and my wife, it typically seems to be about what to do with the children, essentially. And then an argument ensues, which we then ask AI to try and solve. But that aside, what kind of things do you find useful?

Andrew: Outside of work, you're saying? I sit there and answer my daughter's homework questions with Perplexity on my phone. Daddy, what does this mean? I'll quietly go off and run a couple of questions on that. So it's become the new Google for me, just on my phone. And then I have a second project, which I haven't executed on yet: our back garden here in London, which is a bit messy. My plan is to take a few pictures and run them through something or other and get a decent design for the back garden. So I think there are all sorts of things you can do with this.

Jamie: I like it. Nishant?

Nishant: I have a nerdier answer, because I'm inherently a nerd. A lot of my AI usage is actually building lots of silly projects that I can use around the house. I have a dashboard of the real-time number of bikes available downstairs by my apartment, because I bike to the boxing gym every morning around 5:30, and it's a really unpleasant ride if I don't have one of those electric bikes to take me through it. So I have all these cute little things for automating stuff, and I'm actually driving my wife crazy because I'm trying to build her a little app with our schedules and things. I'm very much a kid in a candy store: any ridiculous idea I can think of, I'm using Claude to spool up an app and start using it.

Jamie: Well, you're preaching to the converted. I get obsessed about the time it takes to get from one place in New York City to another, and I know there are a lot of different platforms that give you different answers, and you see which one's best. Anyway, gentlemen, I have taken up too much of your time, but you've both been amazing guests.
And I want to thank you for today's podcast. If people want to get in touch with Andrew, the A-Team Group have their own website where you can see what they're up to. And of course, Versor Investments have their own site too, if you want any more answers. But gentlemen, if you want to give a few parting words.

Andrew: Just a thank you very much. A great conversation and a fascinating topic. I'm sure we'll be back next year to see where we're at.

Nishant: Yeah, likewise. I've been a long-time listener, so it's been really exciting to finally be on this side of the mic. I think we're in a really exciting moment with AI, and to anybody listening to the podcast: just go out there, don't be afraid. Just use the tools and experiment. There's a lot of fun to be had and a lot of efficiency to be gained.

Jamie: Well, I think a year is going to be too long, given the pace at which this world is changing. So Andrew, Nishant, thank you very much indeed. And thank you all for listening.

Jamie: Thanks once again for listening, everyone. And please, as usual, give us a follow, like or subscribe wherever you get your podcasts.

Jamie: The information contained in this podcast does not constitute a recommendation from any LSEG entity to the listener. The views expressed in this podcast are not necessarily those of LSEG, and LSEG is not providing any investment, financial, economic, legal, accounting or tax advice or recommendations in this podcast. Neither LSEG nor any of its affiliates make any representation or warranty as to the accuracy or completeness of the statements or any information contained in this podcast, and any and all liability therefor, whether direct or indirect, is expressly disclaimed. For further information, visit the show notes of this podcast or lseg.com.
