How Santander built a cloud-native trading platform at scale

Good morning, everyone. Welcome to this FSI 311 session. Thank you for being here.

Today we are going to talk about how Santander has built an e-trading, cloud-native platform on top of AWS.

My name is Juan Enrique Gomez. I am a Senior Manager for Solutions Architects, and I have been working with the Santander team for the last four years. With me on stage today are Jon Cas, the CIO for Capital Markets at Santander Corporate & Investment Banking, and John Hendrickson, the CIO for Fixed Income, also in Santander CIB.

Please — the stage is yours. Thank you.

Thank you, Juan. And thank you for giving John and me the opportunity to showcase one of our main initiatives at Santander, the trading platform, supported by AWS.

Before diving into the topic and the more technical material, a few words about Santander. As you may know, Santander is a Spanish bank — a European bank present in many locations across Europe and the Americas, with a unique presence in Latin America.

We have more than 160 million customers and 200,000 employees. John and I are part of the CIB capital markets structure, which has close to 7,000 employees and 40,000 clients, mainly corporate and financial institution clients.

Today, we're going to talk about an initiative we started four years ago around our electronic fixed income platform.

At the time, we had a mandate coming from the business — because everything has a business rationale when you are in a bank.

Darwin, which is the name of the application and the trading platform, basically provides financial products — rates and credit products such as bonds and swaps — to corporate and financial institution clients. We send prices through what we call multi-dealer platforms, such as Bloomberg and Tradeweb, and offer those products to those clients.

So you can see that technology is key for the business — actually, technology in itself is the business. Four years ago we had a mandate to go for speed, improving time to market compared to what we had initially; to increase cost effectiveness, shifting spend toward changing the bank rather than just running the bank; and to adopt a CI/CD mandate, with more and more frequent deliveries.

If we fast-forward four years into that journey — and we will go into more detail later — here is where we are today, with the help of AWS. We have a team of 50 developers, or business technologists, and 22 AWS services supporting the cloud platform in Europe and in the US. We have also pushed for a hybrid model between in-house development and leveraging a couple of fintechs, on the connectivity side and on the UI side. We have leveraged a lot of the services from AWS, and it has proved very efficient.

We have gone through the CI/CD journey, automating more than 5,000 tests before go-live, and we have ended up with 300,000 lines of code.

Now I will give the floor to John, who will dive a little deeper into that journey. Thanks to all of you for coming.

I'm going to talk through the journey we went through. Before even talking about the technical implementation, we started by thinking about what organization and setup we needed for this project.

We knew it was going to be a long and very complicated project, so we wanted to partner with our business. We needed the business to support us with funding and understanding over many years — as John said, we're in year four and we still have years to go.

We also knew it was going to be complex, both technically and functionally. So we implemented a product-based organization, creating squads around the products — in our case, CDS, bonds, and swaps — with a product owner, typically someone from the business side, setting the priorities for each squad.

Then you have this autonomous squad structure. When we thought about how to staff the squads, we wanted as much autonomy as possible within each squad — business analysts, developers, QAs, front end and back end — to avoid too many interdependencies with other teams.

So we created this start-up environment: an autonomous squad and a business owner just getting on and working. That helped us attract and retain talent, and it also created a partnership with the business, so that they understand much better what we're doing.

We serve users in investment banking more or less all over the world. As of today, Darwin has users in the UK, Spain, and the US, and the short-term roadmap for next year is to expand further into Hong Kong, Mexico, and Poland.

So we knew from the beginning that we needed to build a global application, and we'll touch later on what that means.

From the very beginning, after setting up the organization, we started thinking about the architecture principles we wanted to apply.

We started off saying that the right high-level architecture was a microservices architecture. One of the motivations was that we knew we would have multiple platforms — CDS, bonds, swaps — with users basically all over the world.

A lot of the components would be the same across these different platforms — the same business functional context. But in terms of deployment and agility, you don't want to turn off a whole platform just to update small components, particularly when you have users around the world and it's very difficult to find maintenance windows.

A microservices architecture helps us with agility: we can easily take one piece of a particular platform, update it, and restart it without affecting the whole platform.

We also thought microservices would help us with efficiency. We have a relatively large team — as we said, around 50 engineers, most of them building software. With microservices, you can have one portion of the team developing one functionality and another portion developing another, and as long as you have a very clear API, there's no reason to step on each other's toes. It helped us be efficient in terms of team structure.

Most of our microservices are quite lightweight, so we can run them active-active across multiple Availability Zones. That helped a lot with building a fault-tolerant and resilient system, and performance is obviously key.

With microservices, you can scale only the services that need scaling, unlike a monolith, where you have to scale everything.

That was also one of the reasons for going with this architecture. In terms of quality and testing, as John said, we spent a lot of time building a testing framework — I'll show you that later — and microservices are great for that, because each microservice has a well-defined functional context, an input contract, and an output contract.

And you can put test cases around that. So whenever someone touches a microservice, you run the test cases and you know that the microservice still complies with its contract. It makes testing a lot easier. The same thinking applies to domain-driven design.
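To make the contract idea concrete, here is a minimal sketch — not Darwin's real code — of a hypothetical quote service with an input contract (an RFQ) and an output contract (a quote), plus a test that pins the service to that contract:

```python
# Contract-style test sketch for a hypothetical quote microservice.
# The input contract is an RFQ dict; the output contract is a quote dict.

def price_rfq(rfq: dict) -> dict:
    """Toy pricing service: enforces the input contract, returns the output contract."""
    assert {"client", "instrument", "side", "size"} <= rfq.keys()  # input contract
    mid = 101.25   # stand-in for a real market-data lookup
    spread = 0.05
    price = mid + spread if rfq["side"] == "BUY" else mid - spread
    return {"rfq_id": rfq.get("id", "?"), "price": round(price, 4), "currency": "EUR"}

def test_quote_contract():
    quote = price_rfq({"id": "rfq-1", "client": "ACME", "instrument": "DE0001102580",
                       "side": "BUY", "size": 1_000_000})
    # Output contract: exactly these keys, with these types.
    assert set(quote) == {"rfq_id", "price", "currency"}
    assert isinstance(quote["price"], float)

test_quote_contract()
```

Because the contract test only cares about inputs and outputs, the internals of the service can be rewritten freely as long as the test still passes.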

With domain-driven design, we wanted a close link with the business. The business and IT often speak very different languages, and domain-driven design really helps, because you take the business terminology and bring it into the IT world.

So when you think about naming the microservices, you name them according to business concepts, and you do the same with class names and attribute names.

So if you take a trader, they would understand our architecture: an RFQ service, and inside the RFQ service, classes with genuinely functional names. That makes it easier to have the conversation between the business and IT.
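A small illustration of that naming idea — the class and attribute names below are invented examples, taken straight from trading-desk vocabulary so that a trader could read the model:

```python
# Domain-driven naming sketch: business concepts become class names (illustrative only).
from dataclasses import dataclass
from enum import Enum

class Side(Enum):
    BUY = "BUY"
    SELL = "SELL"

@dataclass
class RequestForQuote:      # the business calls this an "RFQ"
    client: str
    instrument: str          # e.g. a bond ISIN
    side: Side
    notional: float

@dataclass
class Quote:                 # the desk's answer to an RFQ
    rfq: RequestForQuote
    price: float

rfq = RequestForQuote("ACME", "DE0001102580", Side.BUY, 1_000_000.0)
quote = Quote(rfq, 101.3)
```

Nothing here needs translating for the business: "RFQ", "side", "notional", and "quote" are their words, not IT's.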

We work a lot with events. If you think about a trading platform, you basically have events that are market updates — price changes happening in the markets — and events that are clients wanting to buy things: a typical RFQ.

So we implemented an event-driven architecture to handle that, and I'll go through in a bit how we used ElastiCache to implement it.

We decided very early on that we wanted to go cloud native, and we felt strongly about it for many reasons. We said: we don't want to look after hardware, we don't want to do the heavy lifting, we want to focus only on delivering code. I think that was one of the key drivers of going cloud native.

So we use a lot of managed services. Another reason is that we knew this platform was going to take a long time to implement and would stay with us for a long time, so we wanted to make sure we could leverage any new technology coming out from the cloud provider — in this case AWS — because they innovate much faster than we can in our team.

Whenever there's new technology or a new capability introduced — and we see a lot of that these days — it's very easy for us to incorporate it into our ecosystem.

If you take the architectural principles and the idea of domain-driven design, this is how it fits together: you end up with an architecture of microservices with very functional names — client API, RFQ engine, market data, et cetera — each with a very isolated functional boundary.

I'm now going to go through the implementation of a specific microservice in a bit more detail, to cover the technologies we use.

Most of the microservices we run are shielded by an Application Load Balancer, and particularly for the interaction with our users, the microservices sit behind an NGINX server.

So it's a typical setup: NGINX provides the HTTP or HTTPS interface and routes down to our code — call it the functional implementation of the microservice.

That microservice might then talk to RabbitMQ. We started with RabbitMQ, and we're looking at maybe moving to Amazon MQ, but at the moment we're still on RabbitMQ.

We use RabbitMQ for anything that needs persistence and guaranteed delivery. If you think about a trade, for example — you've executed a trade, and you don't want to lose the fact that you've closed that trade — we use RabbitMQ for that.

Then we use other services for more volatile data: they sit behind a Network Load Balancer, and we use ElastiCache — the Redis implementation — for this.

Think about price updates, for example, or RFQ data coming in, and any other market data we need — it all goes through Redis, and we get very good performance. If you lose a market data update, it's not ideal, but it's not the end of the world, so we use Redis for the fast, volatile stuff. Then we have another NGINX reverse proxy — if you've worked with microservices, you know that's a typical pattern to protect things a little and to allow more clever routing, throttling, et cetera.
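The two messaging paths described above can be sketched as a simple dispatcher. This is a pure in-memory stand-in — the real platform uses RabbitMQ for the durable path and ElastiCache for Redis for the volatile path, and the event names are invented:

```python
# Durable vs. volatile event routing sketch (in-memory stand-ins for
# RabbitMQ and Redis pub/sub; event types are illustrative).
import collections
import queue

durable_queue = queue.Queue()                      # stands in for a RabbitMQ queue
tick_subscribers = collections.defaultdict(list)   # stands in for Redis channels

def subscribe(event_type: str, callback) -> None:
    tick_subscribers[event_type].append(callback)

def publish(event_type: str, payload: dict) -> None:
    if event_type in {"trade_executed", "order_placed"}:
        durable_queue.put(payload)                 # must never be lost -> persistent path
    else:
        for cb in tick_subscribers[event_type]:
            cb(payload)                            # fire-and-forget -> volatile path

# Usage: a price tick fans out to subscribers; a trade is queued durably.
received = []
subscribe("price_update", received.append)
publish("price_update", {"isin": "DE0001102580", "mid": 101.25})
publish("trade_executed", {"trade_id": "T-1"})
```

The design point is the split itself: losing one price tick is tolerable, so ticks take the fast path; losing a trade is not, so trades take the guaranteed-delivery path.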

We also run an OpenSearch cluster for all the logging, to get information about the performance and behavior of the system. Now let's look a little more broadly at how these microservices fit together in a wider architecture.

On the front end, we have two types of front ends. We're using OpenFin — OpenFin is a fintech that provides a desktop application which is basically a secure browser. Think of it as a Google Chrome that gives the user the look and feel of a thick client, but really it works like a normal web application.

Then we have an Excel plug-in — you can't get traders completely away from Excel, so we do provide some information to Excel for traders to use. These two — call them the user front ends — connect back to our microservices through the Application Load Balancer I mentioned before. When starting up for the first time, they load the static data — the images, widgets, et cetera for the front end — from S3. If the user is not authenticated and needs to authenticate, we're using Amazon Cognito for that.

They get authenticated, and then the microservice needs to do something. If it needs to send an order — because an order was closed — that goes through RabbitMQ. If we need to write something to or read something from the database — user preferences are a good example, like the layout of the screen and the colors, or the trades themselves — we write to and read from DynamoDB. That's our persistent storage.

If we need to connect back to the corporate network — we have some systems, like compliance, that sit on premises — traffic goes through Transit Gateway all the way into our corporate network. And we use PrivateLink for our connectivity to the markets, through another fintech called TransFICC.

TransFICC provides us normalized connectivity to the marketplaces, and they give us that service through PrivateLink. So in our account we have a clearly defined API; we connect to that, and from there we go to the market. And of course, we use ElastiCache for all the fast-moving information.

So this is the high-level architecture. Where you see the microservices in the middle, think of that multiplied by 10 or 20, because we run a lot of different microservices. And this point is very important: when we started the project with the idea that we needed a global platform — supporting multiple products and users all over the world — you quickly end up with an image in your head of many hundreds of microservices running everywhere, with very short maintenance windows for updates. You can't do that manually. You have to automate.

So we invested a lot of time in automating everything. First of all, we started with the testing framework. You have the typical testing pyramid, where we automate the unit tests, the service tests, and the user interface tests, and we're quite strict in making sure that any new functionality we develop has full coverage in the testing framework. But that's not enough, because you also need to automate the deployment.

With a very large infrastructure in the cloud, you automate deployment with code, obviously. We started using the CDK. The CDK allows you to generate CloudFormation from the language of your choice, so if you're a Java or .NET developer, you don't have to learn CloudFormation — you implement it in the language you're used to. Developers typically love it, and we have quite a lot of code in CDK: all the microservices, all the network connectivity, and all the PrivateLinks are modeled in code.

I think we generate probably a few thousand lines of CloudFormation from the CDK. The build pipeline runs the CDK, generates all the CloudFormation, and the CloudFormation deploys our environment — it sets up the whole thing, so we can spin up a whole new Darwin environment in a matter of minutes. The first thing we then do is run all the tests, to verify that everything is working fine. From then on, you go to production, and even in production we automated everything — when you're running hundreds of microservices in production, you can't have anyone going in and touching things by hand.
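To make "generate CloudFormation from code" concrete, here is a stdlib-only sketch of the kind of output that synthesis produces. Real CDK code would use `aws_cdk` constructs rather than building dicts by hand, and the service names, sizes, and images below are invented:

```python
# Sketch of infrastructure-as-code synthesis: a list of microservice
# definitions expands into a CloudFormation-style template (illustrative only).
import json

SERVICES = [("ClientApi", 256, 512), ("RfqEngine", 512, 1024), ("MarketData", 512, 2048)]

def synth(services) -> dict:
    resources = {}
    for name, cpu, mem in services:
        resources[f"{name}TaskDef"] = {
            "Type": "AWS::ECS::TaskDefinition",
            "Properties": {
                "Cpu": str(cpu),
                "Memory": str(mem),
                "RequiresCompatibilities": ["FARGATE"],
                "NetworkMode": "awsvpc",
                "ContainerDefinitions": [
                    {"Name": name, "Image": f"example/{name.lower()}:latest"}
                ],
            },
        }
    return {"Resources": resources}

template = synth(SERVICES)
```

The point of the loop is why a short source file can expand into "a few thousand lines" of CloudFormation: each entry in the list fans out into a full resource definition.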

And actually, with managed services, you can't anymore — a lot of the old controls are gone. You don't log into a machine as root and do things; the management tasks are typically provided via APIs. So you end up automating all the management tasks as well, and you drive them from your CI/CD pipelines. That makes it very easy for us to release frequently — I think that's one of the key things. We'll touch on that later when it comes to the business benefits, but we've seen a drastic improvement in how often we release to production.

Now, the group landing zone — Juan Enrique is going to talk about that — uses Terraform. So we have a bit of a mix: the Terraform automations in the landing zone provision the basic infrastructure for us — the security controls, the VPC setup, et cetera — and the more application-specific configuration in the pipelines is done with the CDK.

Very briefly here — I was going to go into a lot of detail, but this is what the user interface looks like. It's the OpenFin navigator that you see, which feels like a thick client, and this is what then connects back to the microservices. I'm going to hand over to Juan Enrique again to talk a little about the container platform.

OK. Now I'm going to go through the different services the Santander team is using to run this platform. There are several things which are really important — the requirements they've described: speed, agility, cloud native — and these are critical when it comes to making the platform efficient.

The first thing is that they need to support this domain-driven architecture based on microservices, so in the end you need somewhere to run your containers. You know that on AWS we offer a broad set of services for running containers: we have Amazon EKS, we have Amazon ECS, and you can run those based on EC2 or based on Fargate. When you gather all the requirements — mainly the business requirements — that starts to give you clues about where you should go.

At the same time, Santander is a highly regulated customer, as most of you probably know. They need to meet all the compliance, risk, resiliency, and availability requirements. It's not only about giving the business the fastest access to new functionality; all of that also needs to be taken into consideration.

When you look at the need for speed, one thing that always comes to mind is simplicity. We have Kubernetes — it's a great tool, and it offers a lot of functionality. But is there any value for the developers in debating which networking or CNI driver to use? Maybe not, because we don't need that in this specific case.

The initial idea, as John described, is to give the power to the developers to meet all the business functional requirements. The developers should just focus on writing code and debugging code, following the CI/CD strategy John has described.

So do we really need to run on Kubernetes? And by the way, Santander runs thousands of Kubernetes clusters. Probably not — we don't need anything Kubernetes-specific here. So the first decision was to use ECS. Why? Because with a tool like the CDK — or Terraform; the tool isn't the key aspect — the developer just has to say: I need to deploy this code; I need this amount of compute resources and this amount of memory; and I'm going to use this role to execute this code, because it needs to talk to this other part of the platform. Can ECS do that? Yes, ECS can do that.
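That developer-facing declaration — the code, the compute, and the role — can be pictured as a task definition. The sketch below mirrors the shape of an ECS task definition, but the names, account ID, and ARNs are all made up for illustration:

```python
# What the developer declares (illustrative values only): the container image,
# the compute resources, and the IAM role the task runs with.
task_definition = {
    "family": "rfq-engine",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "512",        # 0.5 vCPU
    "memory": "1024",    # MiB
    "taskRoleArn": "arn:aws:iam::111122223333:role/rfq-engine-task-role",
    "containerDefinitions": [
        {
            "name": "rfq-engine",
            "image": "111122223333.dkr.ecr.eu-west-1.amazonaws.com/rfq-engine:1.0",
            "essential": True,
        }
    ],
}
```

Everything the platform needs to know fits in one declaration: no cluster internals, no node management, no networking plumbing for the developer to reason about.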

So that is very simple, and I can use a tool like the CDK to give that capability to my developers without relying on any third party or other teams. That's fine, but now we have additional decisions to make. We still want to keep things simple, and we still want to meet all these compliance, availability, and resiliency requirements.

When we talk about ECS — the same as when we talk about EKS — you can run it based on EC2 instances, or you can run it on Fargate. As they explained at the beginning, they started four years ago, and the landscape of containers on AWS has changed during these four years. Now we have a very interesting option, which is Fargate.

If you decide to run your containers on EC2, it's true that all the complexity of deploying the clusters and so on — all that heavy lifting — is on AWS. But you are still taking responsibility for managing the nodes where your containers are running.

So if you choose EC2, you still need to do the sizing. How many traders, in which time zones? How many prices are we going to manage? How many tasks do we need to run? You still need to keep doing that sizing with EC2.

When we talk about the shared responsibility model with EC2, you still have a lot of responsibility: you still have to do all the patching and all the operating system upgrades. That again means you need people maintaining your platform, which makes things not as simple as they would like.

So the next decision they made was: let's look into Fargate. Can we run on Fargate? Using Fargate allows them to get rid of all the heavy lifting of maintaining the platform.

So the ideal world — where the developer just describes "I need to deploy this code, and I need this amount of resources" — is possible. Nobody needs to take care of sizing the platform; nobody needs to take care of patching the operating system. Done.

That brings Santander a lot of flexibility in terms of reducing BAU — they don't need people maintaining the platform — and it helps them deploy new functionality faster. At the same time, it's fully compliant with all the requirements from the regulators and from the internal corporate functions in terms of risk and compliance.

The other thing I think is very interesting is that ECS, in a very simple way, allows them to be fully integrated with other AWS services. And that brings me back to the original name of the session: remember, we said a cloud-native platform, because there is a huge difference between running things on the cloud and being cloud native.

Being cloud native means extracting the maximum value the cloud can provide. Using things like the Application Load Balancer, or configuration services like Parameter Store — all fully integrated with the container platform — is really valuable for them. They don't need to build integrations with an on-premises tool where they store secrets, and they don't need to deploy the usual appliances that run load balancing on premises. The developer just describes: this is my code, these are the resources I need — and by the way, I also need a load balancer and a parameter stored on this platform — and it's fully integrated.
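One small example of that integration: instead of wiring the application to an on-premises secret store, the container definition can simply point at a Parameter Store (or Secrets Manager) ARN, and ECS injects the value at start-up. The shape below follows the ECS container-definition `secrets` field; the names and ARN are made up:

```python
# Container definition fragment showing native secret/parameter injection
# (illustrative names; the ARN is fictional).
container_definition = {
    "name": "market-data",
    "image": "example/market-data:2.3",
    "secrets": [
        {
            "name": "MARKET_FEED_API_KEY",  # environment variable the code reads
            "valueFrom": "arn:aws:ssm:eu-west-1:111122223333:parameter/darwin/feed-key",
        }
    ],
}
```

The application code just reads an environment variable; where the value lives and how it is fetched is the platform's problem, not the developer's.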

It makes developers' lives much easier than having to integrate with other tools. A final comment: they are not fully serverless yet. They are moving to this Fargate strategy — if I'm not wrong, most of the Linux containers are already running on Fargate — and that aligns very well with Jon's comment at the beginning about cost efficiency.

I don't need a big EC2 instance to run my containers; I just define the tasks — how many tasks, how they scale — done. They're finding a lot of cost efficiency using this serverless platform, and they're on their way to migrating the Windows containers to Fargate as well.

As I said before, the container landscape on AWS has changed a lot during the last four years. So why containers on ECS and Fargate? Because they're fully compliant with all the risk and compliance requirements, and because they're simple and give a lot of power to the developers.

OK, the next topic — John has mentioned some of this — is how we manage the storage. You all know that managed services are very important here, because AWS takes care of the heavy lifting, and managed services are especially useful when it comes to things like ElastiCache.

In this case, ElastiCache for Redis. Running distributed systems that live in memory is really difficult — maintaining them, patching them, and so on — and it's very risky, especially because on this platform, these are the systems providing data to the traders in real time.

They take this information and make it available to different services in real time. It's true, as John mentioned before, that maybe we can afford to lose one price, but we cannot afford to lose the whole platform.

So leveraging managed services for in-memory workloads, like Amazon ElastiCache, is really valuable for them. And by the way, we're now launching the serverless version, so we'll probably talk about that in a few weeks.

The other services they use for storage — I'm not going to spend much time here — include DynamoDB, a highly efficient NoSQL database, which is very well aligned with their strategy of being present across the world.

As John explained, they are present in different markets around the world. DynamoDB provides them with high-speed persistent storage for their data, and it will also allow them to deploy across the world. Finally, there's OpenSearch — I'll talk about OpenSearch later — and S3 takes care of all the static content.

The other part I think is really important is how Santander provides all the applications with specific capabilities that matter for a regulated entity. I'm going to spend a bit of time on two aspects: the first is connectivity, or networking, and the second is security.

Usually, when it comes to connectivity or security, we spend a lot of time with the different financial services customers. The Santander group started building its own landing zone a few years ago, and one of the things it provides is this connectivity to the different applications.

How does connectivity work at Santander? They have different zones of connectivity: their on-premises systems, their AWS systems, the internet, and the third parties. As you saw before, we have traders, we have on-premises systems on their private cloud, and we have third parties like TransFICC or Bloomberg feeding data to the platform.

Imagine if, when starting this project, you had to spend months or even years building all this connectivity. Instead, it's available out of the box in the Santander landing zone. It relies on Transit Gateway: Transit Gateway is the central piece that takes all the decisions about how traffic flows between all the systems involved in an application.

Of course, they use Direct Connect to connect with the on-premises systems, so all the traffic that flows between the on-premises systems and anything running on AWS is fully encrypted. When they started building the platform four years ago, we didn't yet have MACsec supported on AWS.

So they used the typical architectural pattern based on a transit VPC, where all traffic entering or leaving AWS is fully encrypted — in this case, using physical appliances to do that encryption. They're now in a project to migrate to MACsec and get rid of those appliances. Either way, they guarantee that all the traffic going to their on-premises systems — whether it's the traders' platform or their private cloud — is fully encrypted and fully managed by the team.

The second thing is that once traffic arrives at AWS, it reaches the Transit Gateway. Why is this important? Because there are some rules which are mandatory. By the way, I didn't mention this before: some teams are global — for example, networking and security are global functions within the bank.

So you need to be fully aligned with those teams and the policies they dictate. When traffic arrives at AWS, or wants to leave the AWS accounts or platforms, it needs to be inspected.

If you go out to the internet, or something comes in from the internet, it needs to go through AWS Network Firewall via the Transit Gateway. All the traffic from the internet comes in through the ingress VPC, goes to the Transit Gateway, and the Transit Gateway sends that traffic to the Network Firewall.

If your application wants to talk to a third party like Bloomberg, the traffic leaving the VPC goes again to the Transit Gateway, and the Transit Gateway sends it to the Network Firewall. Traffic to third parties also goes through the Network Firewall: it gets inspected, approved or dropped, and done.
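The hub-and-spoke inspection pattern described above can be modeled as a toy routing function: every inter-zone flow is handed to the Transit Gateway, which sends it through the firewall before it is allowed on. The zone names and allow-rules here are invented for illustration:

```python
# Toy model of hub-and-spoke traffic inspection via a central Transit Gateway
# and Network Firewall (zones and rules are illustrative, not Santander's).
FIREWALL_ALLOW = {
    ("app-vpc", "bloomberg"),     # approved third-party flow
    ("internet", "ingress-vpc"),  # inbound from the internet
    ("app-vpc", "on-prem"),       # back to the corporate network
}

def route(src_zone: str, dst_zone: str) -> str:
    # Every inter-zone flow transits the hub and gets inspected.
    hops = [src_zone, "transit-gateway", "network-firewall"]
    if (src_zone, dst_zone) in FIREWALL_ALLOW:
        hops.append(dst_zone)
        return " -> ".join(hops) + "  [approved]"
    return " -> ".join(hops) + "  [dropped]"
```

The key property the model captures: no flow ever goes zone-to-zone directly; approval or rejection always happens at the central inspection point.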

Transit Gateway also gives them this flexibility for east-west traffic. For example, if in the future they need to talk to other AWS Regions — because they're deploying additional capabilities there — it will be easy: all the traffic will go through the Transit Gateway, and the routing decisions are made there.

This is something they have available out of the box on AWS. It's run by a global team, but it doesn't impact them in any way: they just plug in the application, and everything works like magic.

And finally, John also mentioned that they are using PrivateLink to connect to these third parties.

They also, in some cases (not in this use case), use PrivateLink to connect to other applications inside the bank. So one of the things they are providing to the different projects in the bank is an account, the shared services account, where they can plug in these PrivateLink endpoints and get access to whatever services or data are published through them.

And again, when you get out of your VPC, you always go through this transit gateway and your traffic gets inspected. So I think it's a pretty smart architecture that they have implemented.
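As a sketch of the consumer side of this pattern, here is how a PrivateLink interface endpoint might be requested with boto3. The service name, VPC, subnet, and security-group IDs are invented placeholders, not Santander's real values; in practice the provider's service name comes from the third party.

```python
# Sketch: requesting a consumer-side PrivateLink (interface) endpoint.
# All IDs below are invented placeholders for illustration.
endpoint_request = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0example1234567890",   # the shared services VPC
    "ServiceName": "com.amazonaws.vpce.eu-west-1.vpce-svc-0exampleprovider",
    "SubnetIds": ["subnet-0exampleaaaa1111", "subnet-0examplebbbb2222"],
    "SecurityGroupIds": ["sg-0exampleingress3333"],
    "PrivateDnsEnabled": False,
}

# In a real account this would be submitted as:
#   boto3.client("ec2").create_vpc_endpoint(**endpoint_request)
print(endpoint_request["VpcEndpointType"])  # Interface
```

Adding a new third party then means plugging one more endpoint into the same services VPC, with no change to the inspection path.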

So let me go to the next part, which I think is even more important, because in the end they are a financial services company and they need to be sure that everything is in place and that they don't make any kind of mistake.

So I'm going to start with a very classical problem that we find when we talk with financial services customers, which is: how do I give the right autonomy to developers to define IAM roles? OK.

So imagine you have your development team, they have built a microservice, and this microservice needs to talk to a DynamoDB table. In the usual world (probably most of you know this, and I have suffered it), you have to go to your ITSM tool, open a ticket and describe it: I want to talk from this microservice to this table on DynamoDB, I want these permissions, and I want this role to be assumable by this pod or task or whatever is using that role.

So someone will read that ticket, and maybe two hours later, or two days later, or two weeks later, will answer you and grant you the role. That doesn't make things very easy.

So the approach that Santander has put in place gives a lot of autonomy, well, not a lot, the right autonomy, to the projects to create, for example, the roles. So if a developer wants to create a role where they define a policy, "I need my microservice to speak with this DynamoDB table," they can do that. OK?

But that's a risk: they could elevate their privileges to do something not very interesting. So what they have done is, using service control policies (SCPs) inside the organization, they are limiting access to, let me say, risky operations. For example, the projects cannot execute any action that would modify anything related to the network; that's done only by the global team that runs the network.
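An SCP in the spirit of what was just described might look like the following. The action list is a partial, invented sample, not the bank's actual policy; the point is that a Deny at the organization level overrides anything a project grants itself inside its account.

```python
import json

# Illustrative SCP: member accounts are denied network-modifying actions.
# The action list is a small invented sample, not a complete policy.
deny_network_changes = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNetworkModification",
        "Effect": "Deny",
        "Action": [
            "ec2:CreateVpc",
            "ec2:DeleteVpc",
            "ec2:CreateRoute",
            "ec2:DeleteRoute",
            "ec2:CreateTransitGatewayRoute",
            "ec2:AttachInternetGateway",
        ],
        "Resource": "*",
    }],
}

print(json.dumps(deny_network_changes, indent=2))
```

Because SCPs apply to every principal in the account, including admins, the global networking team keeps exclusive control of the topology no matter what roles the projects create.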

But also, when they give this autonomy to the projects to create these roles, what they enforce is: if you create a role, you need to attach a permissions boundary. They have defined a boundary which restricts the power of the role. So it doesn't matter what you put in the role, you can give full permissions in the role, but as you have to attach this boundary to the role, the role is going to be limited. It doesn't matter what you put in there.
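The mechanics of a permissions boundary can be modeled very simply: the effective permissions of the role are the intersection of what the identity policy allows and what the boundary allows. The action names below are invented samples just to show the effect.

```python
# Toy model of how a permissions boundary caps a role: the effective
# permissions are the INTERSECTION of the identity policy and the
# boundary. (In real IAM, an explicit Deny in either also always wins.)
def effective_allow(identity_allows, boundary_allows):
    return identity_allows & boundary_allows

role_policy = {"dynamodb:GetItem", "dynamodb:PutItem", "iam:CreateUser"}
boundary    = {"dynamodb:GetItem", "dynamodb:PutItem", "s3:GetObject"}

print(sorted(effective_allow(role_policy, boundary)))
# ['dynamodb:GetItem', 'dynamodb:PutItem']
# 'iam:CreateUser' is dropped: the boundary never granted it.
```

This is why the developer can write any policy they like: the worst case is bounded in advance by the security team, not reviewed ticket by ticket.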

So that way the security people in Santander feel pretty comfortable, because they know that nobody can get outside the boundaries in this case. But they are getting huge value from it, because they are giving a lot of autonomy to the projects, which no longer need to rely on a third party to approve and build that role.

And that's very important. I have already talked about the networking and the firewalls, so I'm not going to insist on that.

I think the two other important things are these. When they started to build the trading application, at that moment, identity was a challenge. So they decided to use Amazon Cognito, a fully managed service. It allows you to define your identity practice, you know, the authentication and authorization mechanisms, fully integrated with your application, so very easy.
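As an illustration of what an application does with Cognito-issued tokens, here is a hedged sketch of checking the claims of an already-decoded ID token. In production the JWT signature must first be verified against the user pool's published JWKS; the pool and client identifiers below are invented placeholders.

```python
import time

# Illustrative claim check for an already-decoded Cognito ID token.
# Signature verification (against the pool's JWKS) is deliberately
# omitted here; pool and client IDs are invented placeholders.
USER_POOL_ISS = "https://cognito-idp.eu-west-1.amazonaws.com/eu-west-1_Example1"
APP_CLIENT_ID = "example-app-client-id"

def claims_acceptable(claims, now=None):
    now = time.time() if now is None else now
    return (
        claims.get("iss") == USER_POOL_ISS      # issued by our user pool
        and claims.get("aud") == APP_CLIENT_ID  # for our app client
        and claims.get("token_use") == "id"     # an ID token, not access
        and claims.get("exp", 0) > now          # not expired
    )

good = {"iss": USER_POOL_ISS, "aud": APP_CLIENT_ID,
        "token_use": "id", "exp": time.time() + 3600}
print(claims_acceptable(good))  # True
```

The attraction of the managed service is exactly that everything upstream of this check, sign-up, sign-in, token issuance and rotation, is not the application team's problem.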

Now they are on the path to change, because the landing zone is now providing these identity services, in this case based on Microsoft Entra ID, formerly Azure AD. So they are on the path to migrate to that identity strategy.

And the other important thing is encryption. Again, this is something that is defined by the global security team, and they are enforcing all these policies. So all the traffic in transit is encrypted, and all the data at rest is encrypted. And there is a policy: if you want to deploy an EBS volume or an S3 bucket which is not encrypted, it is not possible. It's not allowed; there is a service control policy, and you cannot do that. Again, it's all about simplicity and providing them with speed through autonomy.
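An encryption-at-rest SCP in the spirit of what was just described could look like this simplified sample (not the bank's real policy): deny creating unencrypted EBS volumes, and deny S3 object uploads that don't specify server-side encryption.

```python
import json

# Illustrative SCP enforcing encryption at rest, as described above.
# A simplified invented sample, not the bank's actual policy.
deny_unencrypted = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedEbs",
            "Effect": "Deny",
            "Action": "ec2:CreateVolume",
            "Resource": "*",
            # Refuse volume creation unless the Encrypted flag is true.
            "Condition": {"Bool": {"ec2:Encrypted": "false"}},
        },
        {
            "Sid": "DenyUnencryptedS3Put",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            # Refuse uploads that omit the server-side-encryption header.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
    ],
}

print(json.dumps(deny_unencrypted, indent=2))
```

Because the rule lives at the organization level, no project can opt out, which is what lets the security team stop reviewing this case by case.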

OK, so I think it's very interesting. The same thing happened with certificates. It's something that usually we don't look at too much, but it's very important. Probably a lot of people in the room have gone through the situation where someone forgot to renew a certificate, or they renewed the certificate but forgot to update it on the load balancer.

So I think that's a very tiny and very simple thing, but using a managed certificate service, AWS Certificate Manager (ACM), is allowing the Santander team to forget about the management of these certificates.
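As a sketch of what "forgetting about certificates" looks like in practice: request a DNS-validated certificate from ACM once, and as long as the validation CNAME stays in place, ACM renews it automatically and the load balancer keeps serving the fresh one. The domain names are placeholders.

```python
# Sketch: delegating the certificate lifecycle to ACM.
# Domain names below are invented placeholders.
cert_request = {
    "DomainName": "trading.example.com",
    "ValidationMethod": "DNS",   # DNS validation enables auto-renewal
    "SubjectAlternativeNames": ["api.trading.example.com"],
}

# In a real account:
#   arn = boto3.client("acm").request_certificate(**cert_request)["CertificateArn"]
# then attach that ARN to the load balancer listener once; no manual
# renewal, and no forgotten certificate on the load balancer.
print(cert_request["ValidationMethod"])  # DNS
```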

So it comes back to the original statement: keeping things simple, keeping things really fast. And just to close: monitoring. They are using two main pieces to monitor the platform. CloudWatch, I think, is pretty important because, you know, having real-time KPIs available regarding, for example, the performance of a piece like Redis, I think, is critical for them. The prices are really important; speed matters in this case. But not only that. Another interesting piece is having an end-to-end vision of what's going on on the platform, to be really quick at detecting any kind of problem.

So using capabilities provided by CloudWatch, like Container Insights, or using X-Ray, is really interesting: you can see the end to end of the platform that you are running.
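To make the real-time KPI idea concrete, here is a hedged sketch of a CloudWatch alarm over a latency metric, as one might define for a price-publishing path. The namespace, metric name, and thresholds are invented for illustration, not the platform's actual monitoring configuration.

```python
# Sketch: a CloudWatch alarm on a latency KPI. The custom namespace,
# metric name, and thresholds are hypothetical.
latency_alarm = {
    "AlarmName": "pricing-p99-latency-high",
    "Namespace": "Darwin/Pricing",        # hypothetical custom namespace
    "MetricName": "PublishLatencyMs",     # hypothetical custom metric
    "ExtendedStatistic": "p99",           # tail latency, not the average
    "Period": 60,
    "EvaluationPeriods": 3,               # 3 consecutive bad minutes
    "Threshold": 50.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "breaching",      # silence from a pricer is bad news
}

# In a real account:
#   boto3.client("cloudwatch").put_metric_alarm(**latency_alarm)
print(latency_alarm["ExtendedStatistic"])  # p99
```

Alarming on a tail percentile rather than the mean is the usual choice when, as here, the business cares about every price being fast, not the average price.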

And then they are also leveraging OpenSearch. OpenSearch is ingesting all the application logs, and they are using OpenSearch Dashboards to build dashboards on top. And they are not using the dashboards only for technical KPIs; some business KPIs are also built and included in these dashboards.
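As a small sketch of the ingestion side, application log records are typically shipped to OpenSearch through its `_bulk` API as NDJSON: one action line plus one document line per record. The index name and log fields below are invented for illustration.

```python
import json

# Sketch: building an OpenSearch _bulk payload (NDJSON) from log records.
# Index name and log fields are invented placeholders.
def to_bulk_payload(index, records):
    lines = []
    for rec in records:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(rec))                           # document line
    return "\n".join(lines) + "\n"   # _bulk requires a trailing newline

logs = [
    {"level": "INFO", "service": "pricer", "latency_ms": 12},
    {"level": "WARN", "service": "pricer", "latency_ms": 140},
]
payload = to_bulk_payload("app-logs-2024.01", logs)
print(payload.count("\n"))  # 4
```

Because the documents carry fields like `service` and `latency_ms`, the same index can feed both the technical dashboards and the business KPI dashboards mentioned above.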

And finally, John already mentioned this part: all the ingestion from the third parties, TransFICC, Refinitiv and Bloomberg, is done through PrivateLink, as I explained before. They have, you know, this services VPC where they can plug in these third parties. Today they have these three; maybe in the future they will need additional ones. But this way they are able to guarantee that all the traffic is managed and inspected and goes through the right path inside the platform.

So now, just to close the session, I'm handing it back to Jérôme. Thank you.

So, before we keep a little bit of time for Q&A, let me step back a bit and look at that journey. Four years ago when we started, Santander, at least Santander CIB, was mostly a vendor shop. So to start such an initiative, which is checking many of the boxes of transformation, whether it's cloud native, CI/CD, TDD, microservices, especially in the context of electronic trading, a pre-trade market activity, let's say that many people did not necessarily believe that we could make it. OK. And four years later, we did deliver on our promise, which is basically focusing on added-value business features, making them more reliable and delivering them much faster. OK. So at the end, we added value for the business, and ultimately this is what we wanted.

Also worth mentioning that, at least in Europe four years ago (maybe it has changed a little bit since, and maybe in the US it's different), I don't believe there are that many capital markets activities that have implemented a cloud-native application in the pre-trade space. I don't think there are too many. So for that reason, we are pretty proud of what we've done. Now, as you've seen, it's pretty complex, and the journey is not over, because we have more asset classes to implement and more geographies to roll out. OK. So the journey actually is never going to be over. It's not a project; like we say, it's more a product. So there are many more things to come.

So here you have a list of things that have already been mentioned by Juan and John, so I will not necessarily go over all of them. I'm just going to point out a couple of them.

I think for Santander initially it was really about having an efficient trading platform to compete with our competitors. The strategy then evolved a little bit, and we now have a strong globalization agenda, so the cloud obviously is a strong enabler in that globalization agenda that the business has.

Now, it's worth mentioning as well, and I think Juan mentioned it a little bit, that security regulations are specific in some countries. Therefore, even though using the cloud should be a push-button from a technical standpoint, the implementation is always a bit more complex, and that's something we are actually facing as we speak.

Also, the landing zone that Juan mentioned: when we started the project, we were probably a little bit ahead of the curve vis-à-vis the group landing zone. So the maturity of the landing zone vis-à-vis the project delivery has been, let's say, a struggle at one point for us. And that's why it's very important to include, very early at the start of the project, all the CTO teams, to address this as positively as possible.

And I will finish with the last thing, which is probably about talent. The search for talent, for very strong cloud engineers, actually proved to be a difficult task over the last four years, at least in Europe, between Spain and the UK, where most of our forces are. And this continues to be a point of attention for us, the search for talent.

So just to conclude, before we open the room, let's say, for Q&A: this is just the beginning of the journey, even though it started four years ago, opening the door for many more asset classes in capital markets. But proving that this transformation with AWS was actually a success for us leaves the door open for many, many more successes. Thank you.
