Northvolt’s software-defined factories

Hello everyone. First day of re:Invent, I hope you're all having a lot of fun and learning a lot. My name is Kartik Krishnamurthy. I'm the Strategy and Business Development Lead for Smart Manufacturing, Supply Chain, and Sustainability at AWS. Before we bring a wonderful customer, Northvolt, onto the stage, I would like to take a few minutes to talk about a few industry trends and challenges that we are hearing from manufacturing customers, and what AWS is doing to help enable manufacturers in their transformation journey.

So manufacturers recognize that they have a wealth of data, and they would like to make use of all that data to improve their production operations. With advancements in key technologies, they see that data and advanced analytics are the next big enablers to become more data driven and improve their operations and efficiency. As they plan and build a strategy to execute on that, they are looking at becoming digitally executed manufacturers. That essentially means looking at critical metrics such as OEE, Overall Equipment Effectiveness, and not only gathering that data to create visibility, but using it to control production as well, optimizing in real time for bottlenecks and thresholds.
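As a concrete illustration of the OEE metric mentioned above (my own sketch, not from the talk): OEE is conventionally computed as availability × performance × quality. A minimal Python example, with illustrative numbers:

```python
def oee(planned_time_min: float, downtime_min: float,
        ideal_cycle_time_s: float, total_count: int,
        good_count: int) -> float:
    """Overall Equipment Effectiveness = availability * performance * quality."""
    run_time_min = planned_time_min - downtime_min
    availability = run_time_min / planned_time_min
    # Performance: how close actual throughput is to the ideal rate.
    performance = (ideal_cycle_time_s * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability * performance * quality

# Example: 480 min shift, 47 min of downtime, 1.2 s ideal cycle time,
# 19,000 parts produced, 18,500 of them good -> OEE around 77%.
print(f"OEE = {oee(480, 47, 1.2, 19_000, 18_500):.1%}")
```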

Manufacturers also want to get to know their customers a little better, to understand their end consumers' behavior, so they can feed that back into product engineering and manufacturing, creating a flywheel. They're able to iterate faster, become more consumer centric, and maintain and gain customer loyalty in the long term.

One thing that COVID-19 and geopolitical events have taught us, and that manufacturers are reflecting in their smart manufacturing strategy itself, is supply chain resiliency. Everything else we spoke about as a flywheel can be really successful, but supply chain fragility can put a stop to everything else. So supply chain resiliency is becoming part of the overall smart manufacturing strategy. Manufacturers are also among the leading consumers of energy, so the inclination towards being more sustainable, lowering cost as an outcome, and being more environmentally conscious is something all our manufacturing customers are trying to do. Environmental regulations and policies are another enabler pushing them towards having sustainability as part of their overall strategy. But we see manufacturers treating that not as a push, but rather as a transformative, innovative way to gain an advantage as they build and implement that overall strategy.

These are some of the challenges manufacturers face. First, data access: we have brownfield customers with legacy equipment installed over 20, 30 years and beyond, and they are trying to integrate that with new equipment and bring all those data silos together. As they do that, they not only want to create access to their IT and OT systems, they also want real-time access with low latency and high availability, so they can use that data transparency to control production. They are also looking at organizing a large amount of data in a standardized way across all the disparate silos, and they need solutions that address the whole problem of organizing that data.

Now they implement that in one plant, gain trust in the technology, evaluate the ROI, and then slowly expand to multiple plants. As they do that, they have to think about the scale of it, and also about the most secure way of doing it, because now factory floor systems, IoT factory edge systems, and on-premises systems are going to interact with the external environment. So how do you do that in the most secure way? An IDC survey says that more data is created every hour today than in an entire year just 20 years ago, and 68% of the available data goes unused.

We have a Deloitte survey that says manufacturers generate over 1,800 petabytes of data a year, and 90% of that data is not used or doesn't add any value to the business. I wanted to ask you: please raise your hands if you are right now building a smart manufacturing strategy with an aim to create access to this trapped data on the factory floor, to use advanced analytics for better business outcomes. That's great to know.

So you are right now somewhere in this journey. You have started on your strategy, and you're looking at data silos, standalone applications, and poor upstream/downstream communication. From level zero there are sensors, actuators, and PLCs, all the way up to ERP, and you are looking at disparate machine communication protocols that you have to address to create access to all the data silos.

Some of our brownfield customers have gone past that challenge and created IT/OT convergence: they have flexible connectivity, have gone past the disparate protocols, and have created system integration. At this point, they are implementing point solutions for specific use cases across a few plants. That's their way of earning trust in the technology before scaling it across other plants. These are point solutions that inform them through offline analytics and are not directly tied to production.

The next step in that journey is the smart factory phase, where IT/OT convergence has happened, the border is gone, and any-to-any communication is possible across the layers you see on the left side; data transparency and decoupling have been achieved. Customers are also thinking about an edge-to-hybrid model. At this stage, they are deploying solutions at scale across their plants, leading to enterprise-level visibility as well. This gives them the ability to not only connect the factory but also control the factory. And as you go towards the control side, you are heading into a software-defined factory itself.

How is AWS helping in all this? AWS has capabilities to help across the collect, store, analyze, and act phases of your overall architecture. There are the factory floor systems, the edge systems, and the smart products we deal with on the factory floor itself: operational data historians, SCADA, MES, PLM, ERP, and more. There are three different sets of services, across factory edge, on premises, and cloud, that help in gathering all this data from the factory floor. Then we look at services across collect, store, analyze, and act, where we work backwards from the use cases, so we can deploy solutions that draw on services from each of these blocks, build the right solution, and scale it accordingly.

The orange boxes you see on top are the business functions. Each of those orange boxes has hundreds of use cases lined up, which call for the right level of analytics services to address them. This may include AI/ML or generative AI services as well. We have purpose-built industrial AI services, such as Amazon Lookout for Equipment and Amazon Lookout for Vision, among others that come into play in this specific area. But analytics needs a lot of data, stored securely and organized properly to scale efficiently; that's the storage layer we are looking at. And then there's a collect layer, with services such as AWS IoT SiteWise that help with collection across factory edge, on premises, and cloud.

Our greenfield customers are builders at heart. They look at this particular stack and leverage all these different services to build towards the factory of the future; you will hear Northvolt talk about that very soon. Our brownfield customers have to deal with legacy systems, so they are looking at modernization and migration along with an overall stack refresh. One thing we are learning as we work through this with multiple customers is that brownfield customers look at quick test-and-learn as a way to expand and build their strategy. So they are not interested in services that have to be stitched together to build solutions; rather, they look at it from an end-to-end solution perspective.

What that means is that the line-of-business personas, whether a maintenance engineer, a production engineer, a plant GM, or a VP of manufacturing, are looking to AWS to come in with end-to-end solutions that help them address specific industry use cases. So what we have done is work with our partners to build solutions that use AWS services in the back end of their architecture, so customers can have these solutions implemented in their industrial environment with minimal customization. Those are the specific use cases called out under smart manufacturing.

One thing to add on that: end-to-end solutions are one aspect, and bringing line-of-business personas into the picture is another, but line of business being part of the strategy, and IT and manufacturing transformation coming together, has always been a winning strategy when it comes to smart manufacturing. I'm sure many of you will agree with that.

Finally, sustainability. While regulations are putting emphasis on making sustainable operations and sustainable logistics core to the smart manufacturing strategy, our customers are looking at how this can become transformational and innovative: how can we bring sustainable products, sustainable manufacturing, and a sustainable supply chain into the mix? Whether the outcome is ESG reporting, circular economy, or decarbonization, you look at the data, get past the incompatibility, achieve the level of standardization needed, and create the visibility to deliver the business outcomes.

So with that, I would like to invite Northvolt on stage. Marcus from Northvolt will talk a little bit about that journey. Northvolt is an AWS customer at the junction of smart manufacturing and sustainability, with proven success in not only collecting data but also controlling the factory. With that, I would like to pass over to Marcus. Thank you, Kartik.

So that's the special battery chemistry that's really important for performance. You see the cell assembly on the right, and all the way to the left you see the recycling operations. So you have a single site with multiple parts of the value chain: a co-location of operations that closes the loop of circular manufacturing.

We are effectively replacing a global supply chain that normally spans multiple continents with a few minutes' walk, and recycling has been part of the Northvolt sustainability model from the very start. That is why we can set this target for 2030: new cells produced using 50% recycled battery materials. Combined with 100% renewable energy, that means we can set a target of 10 kg of carbon dioxide equivalent per kilowatt hour.

What does that number mean? That may be a metric not everyone is familiar with. So let's consider the industry benchmark; what would that be? By a show of hands, who thinks it would be around 20? Maybe 40? 50? It's actually 100. So this means a 90% drop in carbon footprint, or a 10x increase in effectiveness.

So we rest our operations at Northvolt on three strategic pillars. First, renewable energy everywhere. That kind of goes without saying: for electrification to really reach its full potential, we need to make batteries sustainably, and we need to charge them with renewable energy.

Second, vertical integration. We saw that already in the photo of the site. When we talk about battery companies, we often start to think about battery cell manufacturing, and within Northvolt that is the most well-known part of what we do. But there's more to it.

We talked about this active material. This is the battery chemistry that you design, which really determines the performance and longevity of your final product. We also design and make our own packs. These are turnkey systems with connectivity and power electronics, and we sell and install them for energy storage on the grid, but also for industrial applications, for example in mining trucks. That, of course, is essential to electrify the mining industry, which is part of our value chain, so we care deeply about that as well.

And finally, we develop the processes and build the plants for giga-scale recycling. This is important because at end of life we have a battery pack with lots of valuable materials, especially metals. We want to recover these, both because they're valuable and to reduce the pressure of mining for virgin materials. The neat thing here is that we're talking about metals, which are elements, atoms, not long fibers, and you can't degrade atoms; that's not how atoms work, right? So you can reuse them over and over again, using whatever manufacturing technology is modern at that time.

And finally, the reason we're here today: the third pillar is digitalization, which means software and data. When we start greenfield manufacturing, we can opt to start in the cloud; we can be cloud native. So there's no migration story or challenges with moving workloads. Sometimes we get invited to manufacturing-themed talks to speak about the migration journey and give advice to colleagues, and we know nothing; we haven't done any migration. Then, on the data side, since vertical integration is part of our business model, we can open the door to data sharing across the links in the value chain. We get to double, triple, and quadruple dip into the investment in data collection. This also means we can create more sophisticated simulations and models of our operations.

And to be honest, a strategic bet on data and software is the only viable option for us. We are, comparatively, a new, young, and small company in this space, so we have no choice but to act faster and use more software and data. We need to scale with data, not with people. Within manufacturing, as you might know (this is a manufacturing track, so many of you have experience here), you will run into obstacles when you start changing stuff, not least if you change the team working in your process, or you change the product, or you change the equipment. If any of these is new, you will run into fun obstacles to deal with. If all three are new, as in our case, you have the excitement cubed. As we go through our operations, we uncover new information; we learn how to do things better every day.

Going into the cloud from the start, running cloud native, and focusing on serverless technologies allows us to deploy early and iterate often. This is important because speed is one of our major challenges; this has to happen fast. The scale of this industrial bet is immense: as you might have seen on the previous slide, the cumulative customer orders so far are $55 billion. So there's a lot at stake.

So as we scale our physical and digital capabilities and capacity, we need flexibility at the core of our expansion model. It's not an accident that when we look at our architecture and our digital backbone, we gravitate towards serverless; in AWS's case, we rely very heavily on DynamoDB, Kinesis Data Streams, and Lambda, for example. And this is something we can do because we went into the cloud from day one.

That also means we can spend less time on non-differentiating infrastructure and operations, and the autonomous development teams can focus on what really matters: the application and the business logic.

So now let's dive a bit more into the technical details and see what this really means and what it looks like. But first, we'll have a look at some really wonderful drone footage of the factory.

Nice. OK, so let's do some tech stuff and expand into some interesting details. We deal with manufacturing equipment; that's where we collect our data. It's not typically from the web or from e-commerce. That means we often interact with control systems, or a PLC, for example, a programmable logic controller. These devices are not designed to communicate over the internet; they are specialized for controlling mechanical equipment.

So if we start on the left here, you have a representation of a system that we talk to, a PLC in this case, and then some IoT gateways. These gateways are physical edge computing devices on the factory floor which run our own operating system, a Linux distribution based on Yocto.

On this device we run a piece of software that we call a mapper. It acts as an abstraction layer on the floor, which means we can reduce the complexity in the cloud, because we can generalize a little bit already on the factory floor.
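To make the mapper idea concrete, here is a minimal sketch (my illustration, not Northvolt's actual code) of how such an abstraction layer might normalize readings from a device-specific protocol into one common message shape; the class and field names are hypothetical:

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class Measurement:
    """Protocol-agnostic message shape emitted by the mapper."""
    source: str       # which machine/gateway produced the value
    tag: str          # what the value represents, e.g. "weld.current"
    value: float
    timestamp: float  # epoch seconds

class OpcUaMapper:
    """Hypothetical mapper: reads a raw OPC UA node and normalizes it."""

    def __init__(self, source: str, client):
        self.source = source
        self.client = client  # an OPC UA client session, injected

    def read(self, node_id: str, tag: str) -> Measurement:
        raw = self.client.get_node(node_id).get_value()
        return Measurement(self.source, tag, float(raw), time.time())

# Downstream consumers only ever see Measurement dicts, regardless of
# whether the data originally came from OPC UA, HTTP/REST, or SQL:
# asdict(mapper.read("ns=2;i=42", "weld.current"))
```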

To make it easier in the cloud, we communicate with these systems and devices through a set of supported protocols. When it's a PLC, it's often OPC UA, but it can also be HTTP/REST, or SQL, or something else. That's all well and good, but the data is still stuck on the floor; these machines don't run in the cloud, they run physically on the floor.

So to connect the systems with our cloud environment, we need to establish a secure, authenticated connection. Let's do that. We connect and authenticate using AWS IoT Core with certificates, and then we can start streaming data into a Kinesis Data Stream as our entry point.
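A minimal sketch of that connect-and-publish step, assuming the AWS IoT Device SDK v2 for Python; the endpoint, file paths, and topic are placeholders, and forwarding from IoT Core into Kinesis would typically be configured with an IoT rule:

```python
import json, time
from awscrt import mqtt
from awsiot import mqtt_connection_builder  # AWS IoT Device SDK v2

# Mutual-TLS connection authenticated with the device's X.509 certificate.
connection = mqtt_connection_builder.mtls_from_path(
    endpoint="example-ats.iot.eu-north-1.amazonaws.com",  # placeholder
    cert_filepath="/etc/iot/device.pem.crt",
    pri_key_filepath="/etc/iot/private.pem.key",
    ca_filepath="/etc/iot/AmazonRootCA1.pem",
    client_id="gateway-building-a-001",
)
connection.connect().result()

# Publish one normalized measurement; an IoT rule can route this
# topic into a Kinesis Data Stream as the cloud entry point.
message = {"source": "gateway-building-a-001", "tag": "weld.current",
           "value": 182.4, "timestamp": time.time()}
connection.publish(topic="factory/measurements",
                   payload=json.dumps(message),
                   qos=mqtt.QoS.AT_LEAST_ONCE)
```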

Now, these IoT gateways are actually running K3s, a lightweight Kubernetes distribution specifically designed for these IoT and production workloads. The gateways are the worker nodes on the factory floor; the control plane runs in the cloud.

We release this mapper software as highly available pods that come in pairs. Overall, as we build out the full factory, we expect thousands of these gateways, so thousands of nodes. As a concept the nodes are quite similar, but the hardware can be adapted to what we need, as long as it can support this operating system.

That means sometimes it's a small device, think of it as an industrial version of a Raspberry Pi, and sometimes it's quite a capable machine with onboard GPUs, to run machine learning inference on large volumes of image data, for example.

Now we have a fleet of devices that we manage, and that in itself is an interesting use case and activity: managing the software, the deployments, the operations, the updates. We need the parts here to be highly reusable, so that we don't start from scratch on every single device, but we also need to support customization and configurability.

We achieve this with one main repository that shares the core, and on top of that some customization, where we can tag and release. When we push code, we review it and merge it, and then Rancher Fleet picks up these tags and deploys the correct Helm charts to spin up the pods in the right location.

That means the right nodes pick up these pods and start communicating with their assigned manufacturing equipment.

So let's start looking at some data flowing into the cloud environment. We start with a straightforward use case: monitoring, looking at raw sensor data. The data model here is quite simple: we have a source, a tag describing what this is, a value, and a timestamp. This is also the kind of data that comes in the largest numbers and volumes.

We have a few use cases for this kind of data flow. First, of course, near-real-time alerts: when values are out of the expected range, we can throw up alerts and support decisions in near real time. We also want to do analytics, or longer-term investigations of something that happened maybe years ago, where we want to look at a long time span and where we retain the high-resolution data.

And finally, we want to do contextual joins. Maybe something happened, there was an event or a long-running process, and we want to attribute some time series to what happened there. It can be a process that ended, or a specific product, especially if we want to look at something in the environment and connect it back to the traceability of a product, as an example.

Here, the middle row supports the near-real-time workflow: a hot path with Kinesis, Lambda, and RDS Postgres. We can visualize and act on this, for example using Grafana, but it can also be time series analytics tooling built specifically for industry or manufacturing. For the long-term use cases, where we need to store data for a long time, in large volumes and at high resolution, we repartition this data, repack it as Parquet files, and store it on S3.
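As an illustration of that hot path (a sketch under my own assumptions, not Northvolt's code): a Lambda function consuming the Kinesis stream and flagging out-of-range values. The limits table and SNS topic are hypothetical:

```python
import base64, json, os
import boto3

sns = boto3.client("sns")
ALERT_TOPIC = os.environ["ALERT_TOPIC_ARN"]   # hypothetical SNS topic
LIMITS = {"weld.current": (150.0, 210.0)}     # tag -> (low, high), illustrative

def handler(event, context):
    """Triggered by a Kinesis event source mapping; one invocation per batch."""
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        low, high = LIMITS.get(payload["tag"], (float("-inf"), float("inf")))
        if not low <= payload["value"] <= high:
            sns.publish(
                TopicArn=ALERT_TOPIC,
                Subject=f"Out-of-range: {payload['tag']}",
                Message=json.dumps(payload),
            )
```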

Storing the repacked Parquet on S3 allows us to access it with Athena or some other query engine, and still keep data for a long time in a cost-effective way. And finally, to do these contextual joins, we can access the data through Redshift and join it with the contextual data we have there.
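And a sketch of the cold-path query side, assuming Parquet files partitioned on S3 and queried through Athena via boto3; the database, table, partition column, and bucket names are placeholders:

```python
import boto3

athena = boto3.client("athena")

# Long-retention, high-resolution data lives as partitioned Parquet on S3;
# Athena (or another query engine) scans it on demand.
query = """
SELECT tag, approx_percentile(value, 0.99) AS p99
FROM factory_raw.measurements            -- placeholder database.table
WHERE dt BETWEEN date '2022-01-01' AND date '2022-12-31'
GROUP BY tag
"""
resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "factory_raw"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query started:", resp["QueryExecutionId"])
```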

So this is good; this is already quite powerful. Still, it's a read-only workflow, which limits what we can achieve. So let's move on to some more exciting stuff.

Also running in our cloud environment is the software running the critical operations on the manufacturing line. What does that mean? Well, it does not mean that we run robotics control from the cloud; that is probably not advisable. But it does mean that we make critical decisions in the cloud.

This is the MES functionality and capability that you would expect. We're in a manufacturing track, so I assume we have quite a few people from manufacturing here, and I'm going to use some of those concepts. Who's aware of MES systems? Familiar with ERP, PLM? Oh nice, about half the room.

So we'll go through how this fits together. This is important, because this is not an optional retrofit; this is the digital backbone of the factory. This is how we run the factory, and you will see how the different systems connect from outside of this core integration framework, and how everything comes together in that software.

Let's start at the bottom right, with the ERP and PLM. ERP, Enterprise Resource Planning, takes care of company assets like cash, inventory, and transactions, quite important stuff; this is where we have sales orders, production orders, and so on. The PLM, Product Lifecycle Management, is where we have the product and process definitions, and also the source of truth for how we determine quality.

OK. It's important that we put this in the center, with the MES capabilities in the AWS box you see here, because having control of the integration on the factory floor, and building this box ourselves, means we can choose the integrations we do with other software or system suppliers, which come in as leaf nodes on the system. So we keep control over the critical backbone.

Let's consider how this all fits together in a use case as we move through the operations. For example, in the ERP we have a production order and we need to execute on it. So it's fed into the AWS environment and activated, and we start producing. The production order has information about what to produce and how many. And when I say what, that could be the part number or the model of the product, in our case a battery cell.

If we follow a cell going through each process step, it might come into the assembly area where we do a welding operation, or an x-ray operation, or something else. We collect data from these systems in the manufacturing equipment, and it's uploaded straight away into the Kinesis Data Streams. Because we hold the state in DynamoDB, when we get this measurement data we know what is being produced and from which process this data comes. So we can go to the PLM and ask: what are the control limits? What are the quality parameters and tolerances we need to abide by? Then we can go through all of those and verify that everything is as it should be. And all of this happens in DynamoDB, Kinesis, and Lambda.
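A compressed sketch of that verification step as I understand it from the description (hypothetical table and attribute names throughout): look up which product and process the cell is in from state held in DynamoDB, fetch the PLM-defined tolerances, and check the measurement:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
cell_state = dynamodb.Table("cell-state")          # hypothetical table
control_limits = dynamodb.Table("control-limits")  # hypothetical cache of PLM limits

def verify_measurement(cell_id: str, tag: str, value: float) -> bool:
    """Return True if the measurement is within the PLM tolerances."""
    # State held in DynamoDB tells us which product/process this cell is in.
    state = cell_state.get_item(Key={"cell_id": cell_id})["Item"]
    limits = control_limits.get_item(
        Key={"process": state["process"], "tag": tag}
    )["Item"]
    ok = limits["low"] <= value <= limits["high"]
    if not ok:
        # A nonconformity would be raised here and attached to the
        # cell's traceability record for the quality team to review.
        pass
    return ok
```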

And if something should not go according to plan (not that it would, but hypothetically, if we have a problem somewhere), we raise a nonconformity alert, and this becomes part of the traceability record. It's now routed to the quality team, who have the mandate to make a decision: do we accept this for some reason, was the measurement wrong? In the end they decide to reject or accept, and they do that using one of the applications we built, the one at the top right here: a web app for desktop and also for mobile devices, which communicates with our back end through the GraphQL API.

So now we save that state, we save that decision. Say the cell was rejected and it moves into the next process step: the next machine will scan that cell and ask the state in the cloud environment, is this OK to proceed? If it gets a no, it halts operations. That question is represented here by the API Gateway workflow. Capturing all of this, every interruption, every quality deviation, every nonconformity, in near real time, making it available to the operations team on the floor and then aggregated for analytics later on, means that we can take action quickly, stay informed, and make better decisions later on as well.
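A sketch of that "OK to proceed" check as a Lambda behind API Gateway, again with hypothetical names; the machine scans the cell ID and calls something like GET /cells/{cell_id}/proceed:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
cell_state = dynamodb.Table("cell-state")  # hypothetical table

def handler(event, context):
    """API Gateway (REST) proxy integration: GET /cells/{cell_id}/proceed."""
    cell_id = event["pathParameters"]["cell_id"]
    item = cell_state.get_item(Key={"cell_id": cell_id}).get("Item")
    # A cell proceeds only if it exists and carries no open rejection.
    proceed = bool(item) and item.get("disposition") != "REJECTED"
    return {
        "statusCode": 200,
        "body": json.dumps({"cell_id": cell_id, "proceed": proceed}),
    }
```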

Now let's move into some data stuff. Data in itself is a strategic component of Northvolt. It starts with traceability: the ability to track exactly what has happened to each and every cell throughout each and every manufacturing process. As we gather this data, we also create high-resolution data sets, across not only cell manufacturing but pack manufacturing and our R&D labs and validation equipment, to create a unique asset for further optimizing operations but also for making better products.

Each factory will have thousands of gateway nodes ingesting hundreds of thousands of messages every second. To run analytics on this, especially considering we'll have multiple factories, we need a flexible and scalable data platform, so that we can do the monitoring we want, the analytics, the data science, and the machine learning.

If you look at the center row here, we capture the contextual data we get, for example, from the applications, but also the raw data from the manufacturing line, with or without a human in the loop. This can be immutable data such as events, but it can also be mutable objects like identities or states. So every write to DynamoDB is captured by Change Data Capture via DynamoDB Streams.
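To make the CDC step concrete, a minimal sketch (my illustration, not the actual pipeline) of a Lambda attached to the table's DynamoDB stream, relaying each item-level change onward; the destination stream name is a placeholder:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    """Triggered by DynamoDB Streams: one record per item-level change."""
    for record in event["Records"]:
        change = {
            "event": record["eventName"],  # INSERT | MODIFY | REMOVE
            "keys": record["dynamodb"]["Keys"],
            # NewImage is present when the stream view type includes new images.
            "new_image": record["dynamodb"].get("NewImage"),
        }
        # Relay the change into a downstream Kinesis stream for analytics.
        kinesis.put_record(
            StreamName="cdc-changes",  # placeholder
            Data=json.dumps(change),
            PartitionKey=json.dumps(record["dynamodb"]["Keys"]),
        )
```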

These hundreds and hundreds of data sets are then filtered and joined so that we can design analytics-enabling data sets, which are used by technicians, engineers, and managers to prioritize initiatives, but also to root-cause something that has happened. For that central row we rely on a batch ELT paradigm, which means we extract into S3, load into Redshift, and then transform the data using dbt, orchestrated here with Airflow.
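A minimal sketch of what such an ELT pipeline could look like as an Airflow DAG (assumed structure, not Northvolt's pipeline); it chains an S3-to-Redshift load with a dbt transform run via Bash:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="factory_elt",             # hypothetical pipeline name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    # Extract/Load: copy raw Parquet from S3 into a Redshift staging table.
    load = S3ToRedshiftOperator(
        task_id="load_raw_measurements",
        s3_bucket="example-raw-data",  # placeholder bucket
        s3_key="measurements/",
        schema="staging",
        table="measurements",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )
    # Transform: dbt builds the analytics-enabling data sets inside Redshift.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/factory",
    )
    load >> transform
```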

In the bottom row, we have the electrical measurements. These can come from laboratories and validation equipment, where we discharge and charge batteries hundreds and hundreds of times to find out how they perform in terms of power but also longevity. This is really useful for product development, validation, and R&D.

What we have added since last year, when we last talked about this, is another Kinesis data stream after the Change Data Capture. The reason is so that we can add Kinesis Data Analytics, the managed Flink offering. That means we can nominate some really high-value metrics that we want to know in near real time but that are a bit complex to calculate. In our case, it can be mutable data that updates over time and also needs to be joined across data sets; we have multiple streaming data sets that we want to join together and update as new information comes in. And finally, not pictured here but still really interesting from a data perspective, are the computer vision workloads.

For example, taking images from the manufacturing line, doing distributed training on multiple cloud GPUs, and deploying the best-performing models to find defects using computer vision on the manufacturing line. This again uses the same concept with the K3s cluster, where we have nodes with onboard GPUs.

OK. So now we have covered a few capabilities and features inside one factory. Where this becomes extra interesting, the next level, is how we scale this out to multiple factories. To do that, we have established a concept we call cloud factory modules, and it starts with the physical layout of the site. We look at which buildings we have on the site, create a dedicated AWS account for each building, and then spin up a cluster for each of those buildings, with the associated Kinesis data streams for data ingest. This is really nice because if you have an incident, it manifests somehow in the physical building, something is wrong there, and that maps directly and very transparently to our cloud infrastructure.
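A sketch of the building-per-account idea in AWS CDK for Python (assumed naming; real account provisioning would go through AWS Organizations, which is omitted here). Each building gets its own stack with an ingest stream, deployed into that building's account:

```python
from aws_cdk import App, Environment, Stack, aws_kinesis as kinesis
from constructs import Construct

class BuildingStack(Stack):
    """One stack per physical building: the data-ingest entry point."""

    def __init__(self, scope: Construct, building: str, **kwargs):
        super().__init__(scope, f"building-{building}", **kwargs)
        kinesis.Stream(
            self, "Ingest",
            stream_name=f"{building}-ingest",
            shard_count=4,  # illustrative sizing
        )

app = App()
# Hypothetical mapping of buildings to their dedicated AWS accounts.
buildings = {"cell-assembly": "111111111111", "recycling": "222222222222"}
for name, account in buildings.items():
    BuildingStack(app, name, env=Environment(account=account, region="eu-north-1"))
app.synth()
```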

In the core account, we deploy a number of services that are shared between the different buildings. This can be, for example, DynamoDB for the transactions and operations; it can be RDS databases; it can be EKS for long-running processes in the cloud; it can be OpenSearch, or S3 buckets with raw data. The data account is where we do the refining: we take the raw data from the core account and start filtering, mapping, and enhancing it, for example with other reference data, to create the data sets specifically designed for analytics. This is again what we saw before with Redshift, dbt, and Airflow. You can think of this as a DAG, essentially a directed graph where everything is replayable; if we need to change a definition, or backfill from the raw data once some logic changes, that's no problem. This account is also where we deploy all the workloads and infrastructure we need for machine learning operations, or MLOps.

OK, so that's all we need that's factory specific. We also need a few things that help us in the entire ecosystem, across factories. We start with the shared account, which offers functionality for our developers: for example, this is where we store secrets, the Elastic Container Registry for images, and also our observability platform, which includes Prometheus, OpenSearch, and Grafana. We also have the network account; this is really our network backbone, and you can think of it as a big cloud router. We have Transit Gateways, Direct Connect, and firewalls in this account, and it controls the incoming and outgoing communication and traffic between our AWS accounts. This is also where we integrate with other cloud providers; it turns out that multi-cloud is a thing, and this account is where we face that challenge boldly and deal with the reality. But let's move on quickly from there.

Finally, we have the global account. This is, for example, where we have the central GraphQL endpoint that provides a consistent front-end and user experience, connecting, for example, the mobile application. This traffic is then federated out to the right site with a federated back end, which opens the door to access control based on location and role. All of those things are what we need for one factory. And as you can see, when we go to the second factory and the nth factory, we basically spin up a new core account, spin up a new data account, and create a new building account for each building. This is really neat, because now we have one consistent framework, one architectural concept; we can move between sites and feel right at home. But we still have the customization potential and can configure based on the site. That's really the strength of this concept.

Our vision with software-defined factories is using this modular framework of high reusability yet local configurability. That means we want everything to be defined in code, so that it's repeatable, consistent, portable, and evolvable. We're taking the approach of everything-as-code and applying it to our factories; call it factory code. Everything we touched on here in terms of data and software is a great idea even for a single domain or a single factory: with a single factory, you still want to improve your operational excellence and run data-driven workloads like predictive maintenance and predictive quality. But our bet is that the truly differentiating story lies beyond the single domain; it's connecting the links in the value chain.

In our case, that's the active material, this battery chemistry that we design and manufacture; the cell manufacturing; the pack manufacturing; tracking and following the product during its operational life; and then finally the recycling. I say finally, but that's just the start of going back to active material manufacturing. What could be achieved here if we combine the data from all the links in the battery materials life cycle? For example, taking the detailed cell traceability from cell manufacturing and combining it with the telemetry from battery packs that are storing energy on the grid. Or what could we do with cell performance, the charging and discharging data from a validation lab, combined with molecular or chemical simulations of new chemistries or a new process? Time will tell, and we cannot wait.

Today we are 5,000 people at Northvolt working tirelessly to ramp up our first factory in the north of Sweden, which will be the first homegrown European gigafactory. It's early days still, and we have many more factories to go. Remember, the world needs more than 100 factories of this giga scale. These factories need to be green, they should be software defined, and we believe they should run in the cloud. Two months ago, at the end of September, we also announced that we will be expanding our gigafactory portfolio to North America: we will set up shop in the province of Quebec, in Canada, with a fully integrated factory covering active material manufacturing, cell manufacturing, and recycling.

So there's a lot to be excited about. I'm really happy that you stayed with me for the entire session to listen to our story. I'm looking forward to speaking with you and having fun and interesting conversations; the team and I will take questions just outside the door later on. So do speak to us, and work with us, as partners or as new team members.

I want to thank you again for being here. Thank you, Kartik, for presenting with me today, and I wish you a fantastic day.
