4x4 Virtual Salon

The fintech infrastructure for the AI economy

Sponsored by Stableton
Yann Kudelski
Head of Strategy & Business Development at additiv
Dorina Buehrle
Co-founder and CRO at Lemony
Susan O’Neill
Founder & CEO at Paygentic
Tom Williams
CEO at Point

4 Topics

A practical exploration of what it takes to build, power, and deploy AI in financial services today, and how these choices translate into measurable outcomes in real-world applications.

The New Foundations: Modern AI Infrastructure in Finance

A look at the missing building blocks in financial infrastructure that limit AI’s impact across payments and wealth management.

Data: The Lifeblood of AI in Financial Services

A conversation on turning fragmented, real-time, and legacy data into a trusted foundation for AI-driven personalization and insight.

Deploying AI

An examination of the often-overlooked barriers to deploying AI at scale, from workflow integration to explainability and adoption.

Real-World Impact

A look at how AI is changing client experiences, advisor capabilities, and operational processes across the financial value chain.

Full transcript

*Disclaimer: The accuracy of this transcript is not guaranteed. This is not investment advice, and any opinions expressed here are the sole opinions of the individuals, not of the institutions they represent.

Chapters

  1. Welcome & panel setup — 00:00:13
  2. AI infrastructure: models, costs & orchestration — 00:03:39
  3. Data, payments & governance foundations — 00:17:22
  4. From pilots to production: deploying AI in finance — 00:29:21
  5. Who wins next: disruption, incumbents & 2026 outlook — 00:53:02

Ben Robinson: [00:00:13]:
Welcome to this very special edition of our 4x4 virtual salon. This is the first time we've ever had all of our speakers here present in one room. It's always recorded live, but for the first time we've all been together. It's also special because we have an amazing panel of speakers today. Just before I introduce you all, I just want to say thank you to our sponsors, Stableton. Stableton is a Swiss platform that provides wealth managers and asset managers with access to liquid or semi-liquid, low cost, systematic index-based products, investing in some of the largest private companies like SpaceX, OpenAI, Anthropic, and so on. Thank you to our friends at Stableton.

[00:01:05]:
Today we're going to be discussing AI infrastructure for financial services, and we have four great speakers here. The reason we call it 4x4 is because we have four speakers and we talk about four topics. We take four audience questions. I have our small audience here in Geneva, and they have some questions prepped. Please don't be shy. Then we also take four polls. Because this is being recorded live in the room, we've done the polls in advance, so I have the answers here. I'll try to work those into the discussion. I'm going to start with you, Dorina, because I'm nervous about saying your surname correctly. Dorina Buehrle is the CEO of Lemony. Lemony is an AI infrastructure platform, and one of the things you've developed is a tool for routing prompts to the right models, to save on latency and costs and so on. We'll hear more about that in a second. Here we have Yann Kudelski, who is the Chief Strategy Officer at Additiv. Additiv does so many things, it's so difficult to summarize what Additiv does in a sentence, but Additiv is an orchestration platform for financial services that helps companies to quickly introduce new financial products and also now helps financial institutions to automate processes using AI. Next, we have Tom Williams, who is from Point Group. He's the CEO of Point Group, or Point, I'm not sure you need the 'Group', right? Point provides a data intelligence platform to wealth managers and asset managers that helps them to organize their data, their client data, their market data, and their investment data. Last but not least, we have Susan O'Neill, who's the CEO of Paygentic. Paygentic is a billing and payment infrastructure for AI-native companies that helps them to monetize any metric based on usage. I think it's a super exciting company. Thanks. You've traveled here all the way from Dublin, so thank you for that. We're going to get started with topic one, which is on the new foundations: modern AI infrastructure in finance. I'm going to start with you, Susan. From your vantage point over there in Dublin at Paygentic, what are the biggest gaps in today's payment infrastructure that limit the full potential of AI?

Susan O’Neill: [00:03:39]:
From our perspective at Paygentic, one of the things that limits the full potential of AI is giving AI agents the ability to access and pay for data and services they need. Those transactions make no sense from an economics point of view. They're often high volume, low value transactions, which traditional payment infrastructure is simply not set up to support. If you think of traditional payment processors, most of them have fixed minimum transaction fees. Those transaction fees can often be higher than the value of an agentic transaction itself. That's one of the reasons we built Paygentic from the ground up, to support agentic transactions.
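To make that mismatch concrete, here is a small illustrative calculation. The fixed-plus-percentage fee below is a generic, hypothetical card-style fee structure, not any particular processor's (or Paygentic's) pricing.

```python
# Illustrative only: a hypothetical fixed-plus-percentage fee applied to a
# low-value agentic transaction.
FIXED_FEE = 0.30      # assumed fixed fee per transaction, in dollars
PERCENT_FEE = 0.029   # assumed 2.9% variable fee


def processing_cost(amount: float) -> float:
    """Total processing fee for a single transaction under the assumed fee schedule."""
    return FIXED_FEE + PERCENT_FEE * amount


tx_value = 0.05  # e.g. an agent paying five cents for a single data lookup
fee = processing_cost(tx_value)
print(f"transaction: ${tx_value:.2f}, fee: ${fee:.3f}, "
      f"fee is {fee / tx_value:.0%} of the transaction value")
# With these assumed numbers the fee (~$0.30) dwarfs the $0.05 payment, which
# is the mismatch described above for high-volume, low-value agentic flows.
```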

Ben Robinson: [00:04:19]:
Fantastic. Dorina, I feel like your value proposition is similar, in that you are also helping AI founders to cope with high variable costs and so on. Can I ask you to talk about the opportunities you see in this whole space, this emerging space of dynamic routing?

Dorina Buehrle: [00:04:39]:
Objectively, we are actually all doing AI wrong. It just evolved very naturally: we took a new technology with Gen AI and built technologies and products out of it. Of course the priority was to make it work. However, how did we make it work? We just routed everything to one big model, because that's the one that was there. That's the one that worked and was sufficient, but also powerful enough to solve everything. But now it's somehow the time to take a step back and ask, what's actually useful and efficient to really make it work on a sustainable basis? That's what we do with Cascade Flow. We take the queries and the tool calls during generation, so you do not have to understand or know upfront which type or cluster or characteristic comes in, and route them first to smaller domain-specific or just powerful small models to see if that's enough to answer them. If not, the query is dynamically cascaded to bigger models, and thereby we can drastically reduce the costs incurred with those big models.
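As an illustration of the cascading idea Dorina describes, here is a minimal Python sketch. It is not Lemony's Cascade Flow implementation; the model names, prices, stubbed model call, and the acceptance check are hypothetical placeholders.

```python
# Minimal sketch of dynamic cascading: try cheap, small models first and only
# escalate to a larger, more expensive model when the answer looks weak.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    cost_per_million_tokens: float  # illustrative prices, not real vendor pricing


CASCADE = [
    Model("small-domain-model", 0.15),
    Model("mid-size-model", 1.00),
    Model("large-frontier-model", 3.00),
]


def generate(model: Model, prompt: str) -> tuple[str, float]:
    # Stub standing in for a real model call; a real system would return the
    # model's answer plus some quality signal (self-rated confidence, a
    # verifier score, log-prob heuristics, ...).
    return f"[{model.name}] answer to: {prompt}", 0.9 if len(prompt) < 40 else 0.5


def is_good_enough(answer: str, confidence: float) -> bool:
    # Placeholder acceptance check; in practice this is task-specific.
    return confidence >= 0.8


def cascade_route(prompt: str) -> str:
    answer = ""
    for model in CASCADE:
        answer, confidence = generate(model, prompt)
        if is_good_enough(answer, confidence):
            return answer  # stop early: the cheapest model that suffices
    return answer  # otherwise fall back to the largest model's answer


print(cascade_route("What is a custody fee?"))  # short query: small model answers
```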

Ben Robinson: [00:06:02]:
Yann, at Additiv, how does that work? Do you tend to just work with one large language model or is it up to the client which models they work with? And what's the secret in your world when it comes to, in fact, we're going to work in our first poll answer here, which is, do you think AI agents will soon be able to complete financial transactions autonomously? 74% of our audience says yes. Tell us also your secret for actually putting agents into production at Additiv, which I think you're doing quite successfully across insurance use cases and other use cases.

Yann Kudelski: [00:06:42]:
A couple of questions there, but I'll try to dissect them. We work with multiple models, but we choose the model that's appropriate for the use case that we're trying to solve. I think down the road a cascading approach could definitely make sense to optimize costs.

Ben Robinson: [00:07:03]:
Were you aware of Cascade Flow?

Dorina Buehrle: [00:07:05]:
For five minutes now.

Yann Kudelski: [00:07:06]:
Basically, we use the model that's most appropriate for the case that we're trying to solve. But maybe I have to go a step back to what we're doing and what problem we're solving. Even pre-Gen AI, or agentic AI, let's put it that way, we work with institutions, incumbents that have a lot of fragmented core systems that do their job, and do it really well, but in a really fragmented way, with siloed integrations left and right. We sit on top of those core systems, we abstract process and business logic onto our system, and then orchestrate workflows end to end. That enables our clients to offer better services and a better client-advisor experience, to innovate and automate, and to be more efficient. Agentic AI, Gen AI, and other AI tools help a lot in supercharging that and making it even more engaging, more efficient. As such, we're embedding AI wherever and whenever it makes sense on our platform. We're not offering an AI-only platform; it's a combination of AI with the domain-specific layer that we have, with business rules that we can combine. We can play both ends. That combination, we believe, brings the efficiency gain and the experience gain.

Ben Robinson: [00:08:37]:
I know there's a section later which is real-world impact. Not to jump ahead, but can you give us a specific example of one process in which you're able to introduce agents or automate end to end?

Yann Kudelski: [00:08:51]:
Yeah, there are different flows. Let's take a credit process for now. Credit decisioning is at a different level, and you also have regulatory guardrails around what you can do and what you cannot. But on credit, very simply, you have a lot of structured and unstructured data. You can make a lot of sense out of it, do pre-work, and pre-populate a lot of data, calculations and logic beforehand for the advisor to work with. But that's just one thing. That's using the same approach that has been done before, injecting AI to make it a bit smoother, better, more efficient. But you can also think about turning the whole process upside down. In the digital age we always said, if you have an analog paper-based process, it makes no sense to digitize it one to one. I think the same applies in the world of AI. We had a digital process; it probably makes no sense to take it and just inject AI. You can rethink the whole process and how it works, and that's what we're working on.

Ben Robinson: [00:10:02]:
Perfect. What I love about this group gathered here is that you're all representing slightly different points in the AI stack, and also slightly different parts of financial services. Tom, you're mostly focused on wealth managers and you're also helping them to organize their data. I don't mean this in a pejorative sense, but you're kind of at the base of their stack. When you think about wealth managers specifically, what are the gaps that you are seeing?

Tom Williams: [00:10:33]:
What I think is interesting is that there is a fundamental issue with a lot of the target operating models. We are an operating model for both wealth and asset managers. You talked about silos. If you lift the lid on any wealth or asset manager, it doesn't really matter what size it is, single family office, multi-family office, private bank, institution, you've probably got a number of core systems that have been in place for 2, 3, 5, 10 years. Around them, bringing it all together, are a whole load of spreadsheets and a whole load of people, all beavering away doing the manual joining, the orchestration that brings it all together. The problem that we're solving is how you join that up and how you orchestrate it at the data level. Now, we were solving that before AI came along, but AI has just shone a light on those data silos. When it comes down to a wealth or asset manager, how can they then support AI being applied to their business model, particularly in the enterprise space? We are just focused on having an aggregated, accurate reference data set on which AI can be applied. Then you can do all the really clever stuff on top of it. But we are just focusing on that. I think that is the problem that is probably going to prevent the largest ROI from being realized on an AI investment.

Ben Robinson: [00:12:04]:
Maybe I'm going to ask you all a question; you can all opine on this. You alluded to this at the start: when Gen AI first came into our awareness, at the end of 2022, it was like that iPhone moment, which was like, this is incredible. We all got super excited, and I feel like we've almost been through the Gartner hype cycle. We got super excited and then maybe now we're realizing we need new payments infrastructure, we need to organize our data, we need to orchestrate across multiple different systems. I suppose, if I ask the question slightly more concretely, do you feel like we're getting to a point now where people are investing in things they should have invested in years ago in order to get ready for this AI wave? Tom, come on, you've got to take that. Are you busier than you were 12 months ago? I don't know how old your company is, but 24 months ago, because people are realizing the need to address what you do?

Tom Williams: [00:13:04]:
I think if you go back 18 months, a client would come to us and go, look, our reporting is terrible, or our analytical ability is limited over multi-asset class, or I'm sweating my data for MI and BI. That is all true, but they will always start with the symptom. Then you'd lead them back and you go, well, the root cause of this is that there is a lack of an infrastructure that allows you to sweat your data as a whole, bring together multiple asset classes, multiple clients, and enrich it with market data, enrich it with outside data, whatever. Bring it into something, and you can then start asking questions of it. This may just be where we are in our growth cycle, but this year people are coming to us and saying, we know we have to sort our data out in order to be able to then apply AI effectively, particularly at the business-wide level. I think within that, the larger organizations have got there quicker because they're thinking about it in a bit more of a strategic fashion than smaller businesses, which, just through the nature of the internal capability that they've got, are still focused on: we've got a pain here, how do I aggregate multiple sources of data? How do I do my multi-asset class, multi-client analysis, whatever it may be? The route to that is different depending on the client size, but the ultimate understanding is that, in order to build an effective wealth or asset manager, you have to have a coherent, accurate and reconciled dataset from which to be able to drive intelligence-led decision making and/or integrate your tech stack and/or apply AI to it. We're definitely seeing an upturn in that.

Dorina Buehrle: [00:14:55]:
I also think that it's a very natural evolution, because in the beginning nobody really had an understanding. It was beyond our imagination what this could all impact, what the solutions would be. In the beginning everybody just thought about simple customer support. We had no idea about agents yet and all of these things. I think, yes, of course it would have been smarter and more stable perhaps to first build the base layers and infrastructure and everything.

Tom Williams: [00:15:28]:
It's also the CEO who is asking, I've just seen this demo from this great AI company, I want some of that, how do I get it? Then the internal teams are having to retrofit the architecture around it in order to provide the data to that particular agent. You always go for the shiny bauble to buy first, and it's Christmas as well. I think that still pertains to a lot of buying decisions.

Dorina Buehrle: [00:15:55]:
There are also different requirements. I think many companies just thought, okay, it's going to be similar to what we have, but there are specific requirements that companies were not aware of. What we also see in that regard is companies that are very interested and do have a successful POC, for example, but then don't translate it to the enterprise or roll it out completely, because only then do they realize, oh wow, this is not as shiny as we thought it would be. It's actually quite costly, or we have to put in a lot of work.

Tom Williams: [00:16:30]:
Or it works on a particular silo, but then you try to extend it to other silos and then suddenly it falls down because it doesn't work across silos.

Susan O’Neill: [00:16:38]:
Absolutely. I think, to your point around it being costly or making buying decisions, we've seen some really interesting monetization models and pricing models come out for agents. Companies are really putting their money where their mouth is now, because of this hype cycle that you were talking about. They're now saying, my product is so good, you only have to pay on an outcome basis or a success basis. To your point around customer service agents, for example, we're seeing companies like Intercom only charge for Fin, their customer service agent, if there's a successful resolution. We are seeing that now replicated across a lot of AI companies. It's a super interesting way of really delivering value for your customers. Definitely time to move away from the old per-seat and subscription model.

Ben Robinson: [00:17:22]:
Yeah. I feel like the subscription model was perfect for the last era, for the SaaS era. Now this is a new era and we need a different type of payment infrastructure. We're going to move on to topic two just to stay on time. Topic two is all around data. Yann, going to come to you first. You talked about how you orchestrate complex processes, and I suppose necessarily you need to pull data from multiple systems. What's the secret there in terms of consolidating data, harmonizing data? How do you do that across your customer base?

Yann Kudelski: [00:18:01]:
Yeah, I think we talked about it before, and Tom mentioned it as well, data aggregation, data harmonization. But it's both. It's two sides. One is harmonizing data and having some objects on your side that you've harmonized, unified, et cetera, that you can work with. But it's also harmonizing access to data that you don't necessarily need to store on your end or hold in a big data lake, but that you need to access coherently.

Ben Robinson: [00:18:28]:
That's a good idea. Maybe we'll come back to that.

Yann Kudelski: [00:18:30]:
We'll come back to that. Then it's on our end. It's more than just aggregating the data and reporting and reading data. It is bi-directional. You need to be able to write as well, and you need to bring meaning to the data, context to the data. Otherwise, if it's just a data dump, then there's not much you can do with it. That's the secret sauce. I think we talked before the podcast about how the plumbing itself is not the most value-add. It's what you do with it once you have the plumbing. The plumbing itself, we talked about custodial feeds for, I don't know, 20, 30 custodians, that's becoming more and more a commodity. But it's what you make out of it, the context and meaning that you give to it and how you can use it.

Ben Robinson: [00:19:18]:
Plumbing is boring, but in every technology wave, the people who do the plumbing tend to make the most money. Plumbing is boring but normally quite profitable.

Tom Williams: [00:19:28]:
I can only speak from the wealth and asset management side, but you're absolutely right. We talk about something called a data value chain: taking data from underlying books of record, which could be external banks, external custodians, or internal systems, and then being able to do more with that data as you take it down the value chain. So you aggregate it, the plumbing bit; you've got to get hold of it, you've got to bring it in. You've then got to turn it into something valuable. In our case, that's an investment book of record that covers all asset classes, which you can then start organizing within the data architecture. You want to be able to then analyze it, visualize it, or put AI onto it. I suppose that data value chain becomes more and more relevant to more users the further down the chain you get. The plumbing is super important because it's the commodity of turning a tap on and off. You have to have that, but the sources are almost infinite now. Therefore there are going to be different specialist providers that are good at certain bits and not good at others. You're going to have to work with each of those, I guess.

Yann Kudelski: [00:20:31]:
For us, I think one key point is, yes, it's about aggregation, the plumbing, but also that we can bridge the gap. You have some core systems, and even if they have the data, if the logic is sitting in the core system and you can't extract that logic or you can't access it, then there's not much you can do with the data. That's why we do the plumbing and extract the data, but we also extract the business logic and process logic on top, which we can then work with, because ultimately you need to work with it.

Ben Robinson: [00:21:01]:
I'm going to ask you a question about data, or maybe model governance and data governance. We asked the question here, what's your primary governance concern with AI in FinTech? Number one, by some margin, was data privacy. My question to you is, when you work with companies, especially larger companies, how do they feel about using public models versus private models? I think you also enable people to route queries and prompts even to models that they've trained themselves, on premise, right? Tell us a bit about how people feel about using different types of models and about data leaking out there or training other people's models.

Dorina Buehrle: [00:21:46]:
That's actually where we came from. We offered solutions for on-prem applications, but with a variety of models, and people did not really understand, or found it hard to understand, which model to use for what. That's how Cascade Flow was ultimately born, to take over that boring job. I think it's the time to take that step back and say, okay, now we need to clean up, take care of our technical debt, and provide the proper infrastructure. I also totally agree that not everybody needs to do that themselves; there have always been companies that provide the base layer and then everybody else builds on top of that. Yes, I totally agree that governance is a huge topic. I do not think that it's going to be a decision for every single enterprise whether to go cloud or not, ultimately also because that's impossible. Right now we are still in a situation where you say, I'm taking this service, and I'm taking this service precisely because it only uses models that I'm fine with. But sooner rather than later, that service will have other providers and other services attached to it, and it becomes completely intransparent. The governance, I think, will rather move towards a hybrid structure where you say everything that is in any way regulated or confidential needs to stay secure on-prem, covered by local models; edge applications, for example, will become very relevant; while complex tasks and multi-step reasoning processes could still run through the cloud. That can all be cascading. It's just a matter of what your main priority is. Of course cost is an attractive first step, but ultimately things like latency or confidentiality or governance or certain industry criteria, it's all the same method ultimately, several sides of the same coin, so to say.
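A rough sketch of the hybrid routing pattern Dorina outlines: anything regulated or confidential stays on local, on-prem models, while complex, non-sensitive reasoning may go to larger cloud models. The classification logic and model names below are illustrative assumptions, not any vendor's product.

```python
# Policy-based routing sketch: governance decides where a query may run
# before capability or cost is considered.
ON_PREM_MODEL = "local-small-model"      # hypothetical on-prem / edge model
CLOUD_MODEL = "cloud-frontier-model"     # hypothetical large cloud model


def contains_sensitive_data(tags: set[str]) -> bool:
    # Placeholder: in practice this would combine data-classification tags,
    # PII detection and per-jurisdiction policy rules.
    return bool({"pii", "client_confidential", "regulated"} & tags)


def choose_target(tags: set[str], needs_deep_reasoning: bool) -> str:
    if contains_sensitive_data(tags):
        return ON_PREM_MODEL             # governance wins over capability
    if needs_deep_reasoning:
        return CLOUD_MODEL               # complex multi-step reasoning may go out
    return ON_PREM_MODEL                 # default to the cheap local model


print(choose_target({"client_confidential"}, needs_deep_reasoning=True))
# -> local-small-model: confidential queries never leave the premises
```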

Ben Robinson: [00:24:02]:
Susan, payment data. Payments throw off tons and tons of data. What are you seeing there in terms of the difficulties that people are having capturing that data, reporting that data, analyzing that data? What's happening in payments as regards data?

Susan O’Neill: [00:24:20]:
Well, I suppose there's a few things. To your point, usability. All the information is coming from different places. The data needs to be structured in a way that it's usable. The next thing of course is speed. If you want to use that data in order to make a real time decision, if that information is coming from different processors, different banks, different card networks, they'll all have different latency, different speeds getting that information. If you're using the information for something like dynamic pricing or fraud detection or something like that, speed is really important. Then compliance. Payment data is super sensitive, it's not like you can just upload it into a model and off you go. Yeah, I would say usability, speed and compliance are the big things to be concerned about.

Ben Robinson: [00:25:09]:
Then if agents do more transacting on our behalf, there are all sorts of second-order effects. You can enable that, right?

Yann Kudelski: [00:25:18]:
Absolutely.

Ben Robinson: [00:25:19]:
You can enable those micro transactions in real time, but I guess there's a whole bunch of other stuff that needs to happen as well. We need to be able to verify that a bona fide agent is acting on your behalf and so on. How are the other parts of the ecosystem forming around agents and payments?

Susan O’Neill: [00:25:37]:
There are literally whole parts of the ecosystem forming around authentication and that side of it. It's also not just authentication, but also the guardrails that need to be put in place. You need budgets and limits. You need to make sure that an agent doesn't run out of funds at a critical moment. There are lots of different pieces to this puzzle and lots of different teams working on it.

Ben Robinson: [00:25:58]:
Sorry to ask you such a specific question, but where do you start? Do you provide the wallets with the limits and funds?

Susan O’Neill: [00:26:05]:
Exactly. That's our core value proposition. That, and of course billing and pricing would be the other pillars around what we're doing.
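To make the wallet, budget and limit guardrails Susan mentions concrete, here is a hedged sketch of the pattern. The class and field names are hypothetical; this is not Paygentic's API.

```python
# Per-agent wallet with a per-transaction cap, a total budget and a low-funds
# alert, so an agent can neither overspend nor silently run dry mid-task.
class BudgetExceeded(Exception):
    pass


class AgentWallet:
    def __init__(self, budget: float, per_tx_limit: float, low_funds_alert: float):
        self.balance = budget
        self.per_tx_limit = per_tx_limit
        self.low_funds_alert = low_funds_alert

    def pay(self, amount: float, payee: str) -> None:
        if amount > self.per_tx_limit:
            raise BudgetExceeded(f"{amount:.2f} exceeds the per-transaction limit")
        if amount > self.balance:
            raise BudgetExceeded("insufficient funds for this transaction")
        self.balance -= amount
        print(f"paid {amount:.2f} to {payee}, remaining budget {self.balance:.2f}")
        if self.balance < self.low_funds_alert:
            # Surface this to a human or an auto-top-up policy before the agent
            # hits a critical moment with an empty wallet.
            print("warning: wallet running low, top-up needed")


wallet = AgentWallet(budget=10.00, per_tx_limit=0.50, low_funds_alert=1.00)
wallet.pay(0.05, "market-data-api")   # a five-cent data lookup by the agent
```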

Ben Robinson: [00:26:16]:
Fantastic. In the interest of time, we're going to move on to section three, unless there are any questions from our audience here. Jerome, you're not normally shy, any questions from you? Maybe at the end? Do you have a question?

Jerome: [00:26:31]:
Yeah, something that I heard around what you were saying: AI systems are very good at creating insights, but then banks, private banking, wealth management, are maybe terribly bad at creating actions for the end user, whether it's in the CRM or for the clients. Between a business workflow to create an action and the AI infrastructure that can create insights, what's the missing layer in between? Is there something missing to really bring that time to market, with relevance and suitability for the entity?

Ben Robinson: [00:27:11]:
The question I think is about, is there a missing piece of infrastructure between gaining insights into customers and then triggering actions? I'm looking at you because I think it was a question specifically about wealth management, but we can all opine.

Tom Williams: [00:27:25]:
Yeah, I suppose in our specific sector, the design thesis for the Point platform came from a lack of data infrastructure that orchestrated the core commodity that allows an investment manager to make a decision. Mainly investment data, but also enriched with client data and market data. If you have those three components, you have the ingredients to allow you to say, I know this, so what does it mean for this investment decision, or for responding to this client in this way. We focus very much there, because there are a lot of tools out there in the market that can then take that data and put it into a workflow, which is the orchestration around buying and selling an asset through an order management system or a trade execution venue. Or into the client relationship management tool, to enable you to do X and Y, check the mandate, have a meeting with the client, or whatever it may be. The interesting thing is when you start looking at AI across an entire business. You're then bringing in AI orchestration as another element within that. We come from a target operating model design perspective, where we're talking about people, process, tech and data, and how all of those, particularly people and process, interact with the tools that they've got to be able to create value. What you then end up having is agents that start doing what people have done; then they have to have the ability to orchestrate, and that's where, you mentioned context earlier, context becomes key. That's where, in my humble opinion, we are not going to find one platform that does everything. We are going to end up having various different tools that come in and start working together at an orchestrated data, process, people, platform, AI agent level.

Ben Robinson: [00:29:21]:
Fantastic. Thank you for the question. Moving on to topic three, deploying AI. Yann, coming to you because you've been working with all sorts of companies, large insurers, large asset managers. One of the difficulties in your world of automating complex processes, complex workflows and so on, is that it's not like you're working with a discrete team. These sit across the enterprise; they sit across teams. How do you even make this happen in practice? How do you get alignment across all those different teams, alignment at the most senior level, for you to even start working on these problems? What's your secret?

Yann Kudelski: [00:30:05]:
It's a good question. The secret is, or not the secret, but independent of AI or not, I think our platform is quite powerful. But how we start is always with a bespoke business problem and how we're going to solve it. Start there, from the business problem, what the objective is, how we can solve it. We go from there and then we expand. But yes, ultimately you need buy-in from the business side and from the tech side, you need everyone on the same page. Especially if you start introducing AI agents, there are a lot of questions about explainability, auditability, safeguards, what have you. That's why we still have a human-in-the-loop design principle in practice. The AI agent is basically serving up the recommendation, but you always have a human who is then actually responsible for the decision at this point in time.
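A minimal sketch of the human-in-the-loop principle Yann describes: the agent only produces a recommendation, a named human must approve it before anything executes, and the approval itself is recorded. This is a generic pattern under assumed names, not additiv's actual implementation.

```python
# Recommendation object plus an explicit human approval gate before execution.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Recommendation:
    case_id: str
    proposal: str
    rationale: str
    approved_by: str | None = None
    approved_at: datetime | None = None


def agent_recommend(case_id: str) -> Recommendation:
    # Placeholder for the agent step (document extraction, pre-populated
    # calculations, drafting); it never executes anything itself.
    return Recommendation(case_id, "Proposed credit decision", "Pre-populated analysis")


def human_decide(rec: Recommendation, reviewer: str, approve: bool) -> bool:
    if approve:
        rec.approved_by = reviewer          # who took responsibility
        rec.approved_at = datetime.now(timezone.utc)
    return approve                          # nothing downstream runs on False


rec = agent_recommend("case-42")
if human_decide(rec, reviewer="credit.officer@example.bank", approve=True):
    print(f"executing {rec.proposal!r}, approved by {rec.approved_by}")
```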

Ben Robinson: [00:31:04]:
Well, I guess this question could be for all of you, but maybe starting with Dorina, what do you think is the thing that people most underestimate when it comes to trying to put Gen AI into production?

Dorina Buehrle: [00:31:19]:
I think the buy-in is a very good topic. It's actually something that we also approached in a way that tries to bring everyone on board immediately, with an open-source approach: the developers can immediately have insight and really have everything laid out, to gain that understanding of how things work, to trial it, to just dive in. On the management side, the story of saving 30 to 80% on the AI bill is relatively easy. The moment the CFO comes in, while the scale might be super exciting, the CFO oftentimes doesn't love it as much when seeing the ratio and the ROI on the products.
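For a sense of how savings in that 30 to 80% range can arise from routing, here is a back-of-envelope calculation. All prices, token counts and volumes below are invented for illustration; they are not Lemony's or any provider's actual figures.

```python
# Compare sending everything to one big model vs. letting a small model
# handle the share of queries it can answer (the cascading idea above).
BIG_MODEL_PER_M = 3.00      # assumed $ per million tokens for a large model
SMALL_MODEL_PER_M = 0.15    # assumed $ per million tokens for a small model
TOKENS_PER_QUERY = 1_000    # assumed average tokens per query
QUERIES_PER_MONTH = 5_000_000
EASY_SHARE = 0.7            # assumed share of queries a small model can handle


def monthly_cost(share_handled_by_small: float) -> float:
    total_tokens = QUERIES_PER_MONTH * TOKENS_PER_QUERY
    small = total_tokens * share_handled_by_small * SMALL_MODEL_PER_M / 1e6
    big = total_tokens * (1 - share_handled_by_small) * BIG_MODEL_PER_M / 1e6
    return small + big


baseline = monthly_cost(0.0)          # everything goes to the big model
cascaded = monthly_cost(EASY_SHARE)   # 70% answered by the small model
print(f"big model only: ${baseline:,.0f}/month")
print(f"cascaded:       ${cascaded:,.0f}/month "
      f"({1 - cascaded / baseline:.0%} saving)")
```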

Ben Robinson: [00:32:10]:
In both of your cases, do you find that people have sort of hit some wall when they come to you or do they come to you early in that? Are they coming to you because they're running this at relative scale and they're just losing money almost on every prompt? Same to you, are they coming to you because they already have a problem they need to solve or are they coming to you because they think it could give them a competitive advantage to use outcome or success-based billing? Maybe you start.

Susan O’Neill: [00:32:37]:
Okay. From our perspective, we see people coming when their solution can't be implemented anywhere else. For example, a company has an AI SDR agent and they charge on a success basis, so they charge a percentage of revenue generated in year one. That functionality is not available out of the box elsewhere, but it's something that we support natively with Paygentic. We definitely see people coming with creative pricing models, and people are now seeing pricing as a lever for success as opposed to an afterthought. The next point where we see people coming is when they're really trying to get to grips with their economics. Early-stage companies are just focused on revenue, but it's when they're starting to think about those metrics and drill down on them that pricing and how they charge for the product becomes important. To your point, the CFO is coming in looking at what products are being used, and it's very difficult to monetize AI on a purely subscription basis. Because if you have a super user, they become wildly unprofitable; and if you have someone who is a very light user, they'll churn. The CFO will simply say, this is not being used, let's cut that subscription. Those are the points at which we start seeing people.

Dorina Buehrle: [00:33:57]:
We've seen that quite a bit, that free solutions have had to be dropped because they're absolutely not profitable. For us, it's similar. They either come right at the beginning, particularly for open source, when a solution wouldn't otherwise be marketable. If they already think about unit economics in the beginning, they see that it can never fly: I have a really great agent, but it's going to send so many frequent automated tool calls that it's going to be massively expensive and nobody's going to pay for that; in the old setting, so to say, not with us. Or they come when the companies have scaled, and when typically investors, CFOs, or the more financial-sustainability perspectives come into play. While it's on the market and super successful, the priority was all to scale, to grow; then at some point they realize this is not a lasting success and we cannot just continue. Then the cleanup starts. There is so much potential that's just left behind simply because of that static routing to a big, totally oversized model. It's fairly easy to redo the base layer and cut the costs, bring more products to market, make more products profitable and sustainable, and ultimately also more attractively priced. If you do not have that huge cost load on your side, you can also become way more competitive on the pricing side.
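Picking up Susan's examples above, here is a hedged sketch of what outcome-based billing looks like in code: per-resolution pricing for a support agent and a revenue share for an AI SDR. The rates and numbers are invented for illustration, not any company's actual pricing.

```python
# Outcome/success-based billing: charge only when the agent delivers the
# outcome, not per seat or per month.
def support_agent_bill(resolved_conversations: int, price_per_resolution: float) -> float:
    # Only successful resolutions are billable; escalations to a human cost nothing.
    return resolved_conversations * price_per_resolution


def sdr_agent_bill(attributed_year1_revenue: float, revenue_share: float) -> float:
    # A percentage of the first-year revenue attributed to the AI SDR.
    return attributed_year1_revenue * revenue_share


print(support_agent_bill(resolved_conversations=1_200, price_per_resolution=0.90))
print(sdr_agent_bill(attributed_year1_revenue=250_000.0, revenue_share=0.05))
```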

Ben Robinson: [00:35:45]:
Tom, I'm going to ask you what you think institutions have most underestimated when it comes to implementing Gen AI. I know what you're going to say. For those of you who don't follow Tom on LinkedIn, he's doing a video almost every day. I'm exaggerating, but he does quite often.

Tom Williams: [00:36:03]:
I have other things in my life, but yeah.

Ben Robinson: [00:36:04]:
But it's a little bit his hobby horse, talking about how people have underestimated having unified consolidated data. Talk to us about that if you want to, but also talk to us about explainability of data. Because in your world you can't have an agent talk to any customer about any product. It all needs to be traceable. Talk to us about that.

Tom Williams: [00:36:31]:
On the auditability side, the reason why I advised on this investment, or the reason that I said this to this client, is almost independent of AI. “Why did you invest me into this at this particular point?” You should, as the business, be able to say to that client, “Because this is what we knew at that point in time, and these were the reasons that we made that decision.” Or, we need to be able to report on that data for a regulatory filing or to answer a client question. That is just a challenge that is based upon having control of your data. You can ask a question of it through AI or through any other conventional tool. AI just puts the need for that explainability on steroids, if you like, because regulation is catching up. Before long it will be unacceptable to say, “Why did you do that?” “Oh, because AI told me to.” It just cannot happen. Therefore understanding the data lineage and the reason why a decision was made, whatever function of the business, is going to become absolutely key. Then also the AI itself must not become a black box, so that you still retain an understanding of what you need to be able to do internally. AI is great at a number of different things; it is not a deterministic calculator for working out performance. You have to do your maths before the AI gets hold of your numbers, and you've got to be able to stand by them. The adoption of AI, and now I'm talking about across the business in that target operating model sense as opposed to within a tool, still needs an evolution of thinking, which is where we are focusing. If they can get that data bit right, it doesn't solve everything, but it allows them at least to have that auditability, that transparency, and the confidence that the data's accurate before they then start doing actions on top of it.

Yann Kudelski: [00:38:41]:
I fully agree. You need to combine the deterministic business logic with the AI. It's not either/or, you need both. Performance, as you said, is calculated.
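As a small illustration of the auditability and lineage point Tom and Yann are making, here is a sketch of recording what was known, and why, at the moment a recommendation is made. The field names and file handling are illustrative, not a specific regulatory format.

```python
# Append-only decision log: deterministic calculations happen before the model,
# and every recommendation is stored with its inputs, reasons and data lineage
# so "why did you advise this?" can be answered later.
import json
from datetime import datetime, timezone


def log_advice_decision(client_id: str, recommendation: str,
                        reasons: list[str], data_sources: dict[str, str]) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "recommendation": recommendation,
        "reasons": reasons,              # deterministic checks plus model rationale
        "data_sources": data_sources,    # which dataset version each input came from
    }
    with open("advice_audit_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record


log_advice_decision(
    client_id="client-001",
    recommendation="Rebalance into mandate-compliant bond allocation",
    reasons=["portfolio drifted outside mandate bands", "suitability check passed"],
    data_sources={"positions": "IBOR snapshot 2025-12-01", "market_data": "vendor feed v3"},
)
```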

Ben Robinson: [00:38:51]:
Just because we need to stay on time, we need to move to real-world impact, but you can carry on talking about that topic. I just want to signal that we're now in section four, real-world impact. How do you do that? Maybe your cascading does this too, but how do you route things to a deterministic model for certain things and a generative model for other things? Is that all built into the workflows? You have a human in the loop, as I understand it. How does this all work?

Yann Kudelski: [00:39:15]:
Right now it's built into the workflow, so you decide what to use. But there are also multiple agents and there's the concept of a coach, like an agent orchestrating a team, a team of agents. But right now it's built into the workflow.

Ben Robinson: [00:39:33]:
Player manager, agent.

Yann Kudelski: [00:39:34]:
Player manager agent. That's a good analogy. I need to copyright this one first. It's built into the workflow. There's still a human-in-the-loop approach right now. The main use cases we see it for are really advisor efficiency and internal workflows. It's not yet, at least in our client setups, let loose freely on clients directly, so to speak, as you said, Tom. Even if AI comes up with a recommendation, there's still human oversight over it. It also has guardrails. In wealth management: is this instrument suitable? Is it appropriate for that client? Does it fit my investment policy? Is it within the mandate? There are still things that need to be checked, or are currently checked.

Tom Williams: [00:40:21]:
I suppose what I find fascinating about that is that we've got clients at the moment where the AI tools, if you like, and Yann said this, they are just tools; we use Copilot because we're a Microsoft stack, and we've gone with a proof of concept with an AI agent company that's looking at compliance. You can see, exactly as happened pre-AI, operating models being built out of different AI components. The challenge is going to be how you bring those together.

Yann Kudelski: [00:40:52]:
It has to come together ultimately. Otherwise you still have marginal silos, marginal benefit. If you can't bring it together, then it's not the step change that you're hoping for.

Dorina Buehrle: [00:41:04]:
It's just that the complexity is going to be huge. Because every agent is, again, several agents, and they are talking. Then you have the manager agent, and what did you call it? The player manager, and the supervisor layer and so on.

Tom Williams: [00:41:19]:
You get to transfer, which is where the license falls out and you go, I don't want Claude anymore, I'll bring in Microsoft.

Dorina Buehrle: [00:41:26]:
But that's the complexity. Somebody has to be there and keep the oversight. That's exactly where we come in, as you said: you do not have to have the oversight anymore. You don't have to define upfront that this part is going to this model and this part is going to this model. Because most of the time we don't even know what's coming in, what kind of query, what kind of tool call. Is it a super easy thing? Then why do we route it to the 4o model at $3 per million tokens, so that the bills are exploding, when it's a super easy task that can be answered with a small model? I think that's where we are heading: the agents will manage to become efficient in themselves, but then we need a management layer that keeps the oversight and keeps it visible. What is used when, and to what degree, to keep that transparency and to enable governance ultimately.

Ben Robinson: [00:42:26]:
Where are we humans still needed? Just maybe to switch on the agents in the morning.

Dorina Buehrle: [00:42:34]:
I think humans will always play a role in terms of governance, but we need the tools for the humans. I don't think the agents are the problem.

Tom Williams: [00:42:46]:
It's a good question, because this brings in shadow AI use. You have the AI tools that are provided by the business to their people. Then there's the fact that Barry down in compliance has also got his own Copilot or his own ChatGPT, and decides to do something off to one side, because he can shortcut it, which takes him completely out of the governance loop. Then suddenly Barry's put a whole load of GDPR-protected data into Claude and it's gone to the United States, and suddenly we have an issue. That's when the people and the process become…

Dorina Buehrle: [00:43:19]:
Very intertwined.

Tom Williams: [00:43:21]:
You've got to bring them together, right?

Dorina Buehrle: [00:43:22]:
Yeah, absolutely. But with cascading, for example, you could also say that everything that is in some way confidential or IP-protected is only routed to local models.

Ben Robinson: [00:43:33]:
Who's your hypothetical person? Does that mean Barry in compliance couldn't send something?

Dorina Buehrle: [00:43:40]:
Exactly. Barry cannot make a mistake anymore. Because that's exactly what also happens with shadow AI and with over-regulation, so to say: when management comes in and says, don't put anything into those chatbots, or don't put anything confidential in, people get anxious or do not want to use it anymore. That then prevents usage ultimately, which you also do not want to cause. I just don't think that we can expect every user to have that understanding and that expertise. That's exactly what we ran into, and ultimately we created Cascade Flow to solve that problem we had: we expected the users to decide which model to use, when, where, for what. But that's just way too complex, and not every user is so much into the technology. We take that off their hands.

Ben Robinson: [00:44:37]:
Do you feel like for more adoption of AI, we need to just make it easier?

Dorina Buehrle: [00:44:41]:
Yes.

Tom Williams: [00:44:43]:
We're having a discussion about this on a use case at the moment: within a wealth manager, a relationship manager being able to create a brief about a particular client before they go into a meeting, and there's this big internal debate about whether you leave that to be self-prompted. I'm going to see Mr. Smith, what was the performance of Mr. Smith's portfolio, and then what happened at the last meeting? Before you know it, the prompt has gone off in a completely unwieldy, ungovernable fashion, but also the output is different from the next time the same wealth manager goes and meets Mr. Jones. So for a lot of people, is it actually better to dumb it down and make it pre-prompted? It's still AI generated, it can bring together a whole set of disparate, governed data sets, but within something that limits hallucinations and stops what you've been talking about. It's interesting, isn't it?

Ben Robinson: [00:45:41]:
Remember when Spotify first came out and you could listen to any song that's ever been recorded. Then it was really getting more adoption when they started using playlists. I feel like we almost need the playlist phase of AI.

Dorina Buehrle: [00:45:55]:
We keep running into these things. There was the phase when I was a teen and I was not allowed to send too many SMS because it was way too expensive. Then there were just automated programs that would limit that. I think all of these complexities are just returning. Ultimately what always happened was that solutions were created for the users, so the mistakes do not happen anymore, because we want that adoption, right?

Ben Robinson: [00:46:22]:
Yeah. Is that what Additiv clients want?

Yann Kudelski: [00:46:26]:
It's a great example, the meeting prep. We had the same debate, and we ended up having pre-set prompts, and not free prompting for an advisor before the meeting, as the current solution.

Tom Williams: [00:46:40]:
Current, I'm taking that back to base.

Yann Kudelski: [00:46:43]:
But it's a debate, right? It's a continuum as well. Things might change down the road, but currently we had the same debate. It's an interesting debate.
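For readers wondering what a "pre-set prompt" looks like in practice, here is a hedged sketch: the advisor picks a client and a fixed, governed template is filled from approved data sources, instead of free-form prompting. The template wording and field names are illustrative, not additiv's or Point's actual design.

```python
# One fixed template for every advisor and client keeps outputs comparable,
# limits prompt drift, and constrains the model to governed, reconciled data.
MEETING_PREP_TEMPLATE = """You are preparing a client meeting brief.
Use ONLY the data provided below; if something is missing, say that it is missing.

Client: {client_name}
Portfolio performance (reconciled figures, do not recalculate):
{performance_summary}
Notes from the last meeting:
{last_meeting_notes}
Open action items:
{open_actions}

Produce a one-page brief with: performance recap, open items, suggested talking points.
"""


def build_meeting_prep_prompt(client_name: str, performance_summary: str,
                              last_meeting_notes: str, open_actions: str) -> str:
    # The advisor never types the prompt; they only choose the client, and the
    # fields are filled from governed systems (investment book of record, CRM,
    # meeting notes).
    return MEETING_PREP_TEMPLATE.format(
        client_name=client_name,
        performance_summary=performance_summary,
        last_meeting_notes=last_meeting_notes,
        open_actions=open_actions,
    )
```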

Ben Robinson: [00:46:56]:
Susan, next time we're going to use an iPad or something cooler than this, I printed it off, but in our poll, we asked which FinTech vertical will AI disrupt first. The winner was payments. Do you agree with that? What are the coolest use cases you see? Talk about a couple of them.

Susan O’Neill: [00:47:14]:
For us it makes total sense that payments will be disrupted first. There has not been a huge amount of innovation on the payment side, and everything in the existing payment rails is built for a human in the loop, so a human actually hitting ‘pay now’. I think it's one of the areas in FinTech which is most ripe for disruption. Because every single action that we've been describing here, every time we send an AI agent off to do something, each of those actions has a cost, whether people realize it or not. Whether it's compute, data, an API call, whatever it is, all those things have a cost. I think payments is absolutely ripe for innovation here because we need to put those pieces together so that payments can move at machine speed. Like I say, high volume, low value, whatever it is, the payment infrastructure now has to be built to support that. It's a really exciting space to be in. There's so much innovation happening all the time.

Ben Robinson: [00:48:18]:
If you're an incumbent payment provider, you've probably thought that you'd seen a lot of disruption already. I think payments is really the segment that's seen the most disruption from FinTechs; FinTechs have gained something like 7% market share, and now you've got another wave. You probably think you'll never get a break. Compliance was next on the list. I don't know why I'm looking at you; I associate you with model governance or whatever. Do you agree with that? I don't know how you disrupt compliance, but maybe it could be much more automated.

Dorina Buehrle: [00:48:48]:
I don't know who exactly answered, but I guess it might also be driven by where the biggest pain is. I assume people would also wish for that to be disrupted, because it's just painful at the moment. But I do agree, by the way, with payments. One of the huge misunderstandings that we see in the industry, or among startups, is that we are so used to SaaS being almost free that everybody is looking at the same KPIs while forgetting that AI is not free at all. While it feels as easy to scale, we have to consider a few more conservative metrics to be sure that it's going to be sustainable.

Ben Robinson: [00:49:41]:
Third on the list was wealth management, which I would agree with. The problem we've always faced in wealth management is that we've never really been able to ‘democratize’ it, to bring a bigger base of customers into professional wealth management. It feels like we should be able to, right? Because it feels like, as you said, it's either where the pain is greatest or where the costs are greatest, and Gen AI, with the right use cases, can provide bespoke advice to customers. Not that we necessarily want to do that yet, or without the right guided advice.

Tom Williams: [00:50:16]:
Maybe provide advice but not actually do anything with it, and definitely not without a human in the loop. It is interesting. Our observation within the clients that we work with is that you are seeing AI being applied to their business models either from a portfolio management system or a CRM or whatever it may be, applying an AI layer to an existing tool, or from the business itself working with a third-party AI company to solve a particular pain, going back to your point. I think we're seeing that. Compliance is one of the use cases being tackled first because AI is very good at spotting patterns within a set of data, going, hang about, this is an outlier, we should flag that. You can support the human in the loop to be able to do that. That's probably where wealth management and compliance cross over. Then on the other side are things like client onboarding, and then how one advisor can serve a wider range of people at a lower cost, which is then to your point on democratization. The most difficult bit is around at what point AI can be deployed effectively to that core investment data and then start making decisions, or advising on decisions, on investments. There's very little wiggle room for getting that right or wrong.

Ben Robinson: [00:51:38]:
The other one that's on the list is lending. I feel, again, like SME lending is a perfect use case, because so much of the cost in SME lending is onboarding SMEs, constantly understanding whether they've breached their covenants, reporting it; it feels like AI can really take the cost out. Actually, the lending decision doesn't require that much. It obviously requires capital to lend the money, but the lending decision is not necessarily a very time-intensive or costly exercise. It's the onboarding and the reporting, all that stuff, that seems to be where the costs are.

Yann Kudelski: [00:52:10]:
It's preparing the case, and I think AI can for sure help there. But I think initially it's a support, be it for the credit officer, be it for the advisor out there creating or establishing a case together with the client. For very simple cases, a cash loan, it's quite automated already as we speak. But if you go into more complex lending situations, you usually have a lot of documents, a lot of information that you need to share, make sense of, and cross-check as well. Detect anomalies, does it make sense, et cetera. I think there AI can really support the advisor or the credit risk officer in becoming more efficient.

Ben Robinson: [00:52:54]:
Any more questions from our audience? Yes.

Audience: [00:53:02]:
We've seen so much disruption on the infrastructure side. Can the incumbents still make their way through, or is it all over for them?

Ben Robinson: [00:53:12]:
Maybe just for the benefit of anyone who hasn't heard the question: who do you think benefits most from Gen AI? Is it incumbents or new entrants?

Susan O’Neill: [00:53:33]:
I can start. Look, I think for the incumbents in our area, the difficulty for them is that they're now trying to retrofit a system that was never built for the agentic economy. Whereas companies like Paygentic have been built from the ground up with this level of flexibility and granularity in mind. It makes it much more difficult for the incumbents to try and do that retrofitting.

Ben Robinson: [00:53:58]:
Is that why the incumbents are doing acquisitions?

Tom Williams: [00:54:01]:
I'd second that. If you look in our space at the front-to-back wealth platforms that have dominated operations for businesses of all sizes, the portfolio management systems, what are they all doing? It's an MCP server and then you can put AI on top of it. Again, to go back to that siloed point that you made, it just then accentuates the issues that are already in the business.

Ben Robinson: [00:54:27]:
But if I may push back, this is your hobby horse, but the incumbent sits on the most data.

Tom Williams: [00:54:34]:
Yeah, I suppose that's the point. This is where I would point to the infrastructure that can be deployed by the wealth manager to start having more control over their own data, ownership of that data as a core commodity which they can then monetize; as opposed to that data being housed within, to your point, a system that was never designed with that use case in mind. It was designed to automate and to support workflow and process, not to house data in a way that you can then sweat it, including with AI. But yeah, agreed.

Yann Kudelski: [00:55:06]:
That's why we exist. We can help incumbents with their core systems. We sit on top, and we can help them take the leap to the next level.

Ben Robinson: [00:55:16]:
Incumbents are in real trouble unless they work with Additiv, right?

Yann Kudelski: [00:55:20]:
No, no. I mean incumbents, they also have some other assets as well. They have benefits. But we can help them.

Dorina Buehrle: [00:55:28]:
But I do agree. Wasn't it always the case that the new players iterated fast, failed fast, made all the mistakes, so that the incumbents could then come and take the ready-to-work and proven models, either acquiring them or building the products that are really well running and proven?

Ben Robinson: [00:55:50]:
Slightly loaded question, but do you see a lot of consolidation coming down the road as incumbents react? Do you think we're in for a big wave of M&A?

Tom Williams: [00:56:01]:
Within what, technology companies or?

Ben Robinson: [00:56:05]:
Yeah, mostly thinking technology. Will Point still be an independent company in 18 months? Are you for sale at the right price?

Tom Williams: [00:56:15]:
Yeah, I don't know. I think what's interesting, where we've seen it, is that a not insignificant number of our most recent client acquisitions have been wealth managers that started working with an AI startup and then realized that they've got slightly ahead of their skis, and that they have invested in a tool that they cannot yet apply to their business. I would suggest that there's been a lot of money flowing into a lot of companies that put AI at the end of their website address. I would suggest that that's probably where we might see some consolidation. But again, it's probably got quite a long way to unfold in that particular cycle.

Dorina Buehrle: [00:57:00]:
I think naturally a lot of things will be combined, not necessarily immediately in an acquisition, but just where things fit together naturally. Then yeah, that's becoming more of an entity at some point.

Ben Robinson: [00:57:14]:
Okay. We're going to wrap up. I'm just going to ask you one final unprompted question, which is: it's nearly Christmas, nearly the end of the year, so give me one prediction for AI going into 2026. It can't be consolidation, because we've covered that. Yann, putting you on the spot first.

Yann Kudelski: [00:57:31]:
One prediction for AI in 2026. That's a tough question at the end. We'll see more agentic AI adoption to get started with. I think it's a continuum, but I think we'll see more adoption.

Ben Robinson: [00:57:50]:
Your prediction is agents go from hype to reality. Give us one line, a soundbite.

Yann Kudelski: [00:57:59]:
No soundbite, but I think the timing is always the question, whether it's 2026, 2030, or 2035. We're on a continuum; we're starting with single, contained agentic AI use cases and then bolting on more until we get to the team-of-agents type approach. That's the continuum that we're going to see.

Ben Robinson: [00:58:22]:
Susan, your prediction for 2026?

Susan O’Neill: [00:58:25]:
From our perspective, prediction has to be around pricing and billing. I see a huge move towards outcome-based pricing, success-based pricing. That's where I see agents being monetized.

Dorina Buehrle: [00:58:41]:
I think that in 2026 the big models will come to a limit, and thereby the whole infrastructure will move much more towards a diversified multi-model approach, where we take advantage of what researchers have predicted for so long already and really take advantage of the numerous industry-specific or powerful small models.

Ben Robinson: [00:59:09]:
Do you think OpenAI's valuation is under threat? You don't have to answer that one. Prediction for 2026?

Tom Williams: [00:59:21]:
Again, I'll just speak for our sector. I think there will be more reticence about jumping in and just investing in AI, mainly driven by the risk question that people will become more aware of. You've spoken about it a lot: not only cost, but also where my data is, what my staff are doing with that data, and then how the business catches up in terms of governance in a highly regulated sector. I think that will be a big thing.

Ben Robinson: [00:59:53]:
That felt like one of your videos, but precisely, what's your prediction?

Tom Williams: [00:59:58]:
More use of Point.

Ben Robinson: [01:00:00]:
More use of Point, great. Thank you to our audience here in Geneva. Thank you for your questions. Thank you to everyone who listened. Most of all, thank you to our four panelists. That was a great discussion. This is the last one we're going to do in 2025 because it's December, but please look out for more 4x4s in 2026. Thank you.

Dorina Buehrle: [01:00:21]:
Thanks for having us.
