Improving manufacturing at Panasonic Energy

Welcome to Manufacturing 101. By show of hands, how many of you are just getting started on your smart manufacturing journey? A few. And how many maybe have started their initial use cases, but they're at the point now where they're looking at how they scale those use cases across multiple sites or add additional use cases to their strategy? Hands? Great.

Well, we hear from a number of automotive and manufacturing customers that while oftentimes they have success deploying those first few smart manufacturing use cases, the challenges really come in when you start to think about how you scale those use cases to multiple sites or multiple production lines and how you really accelerate innovation of the additional use cases at scale.

So in today's session, we're gonna talk about first how to put together a smart manufacturing strategy and scale that strategy through a framework we call the industrial data fabric. And then we're gonna hear from examples specifically from Panasonic Energy, North America about how they've had success with their smart manufacturing journey as well.

My name is Joe Rosing. I'm a Go-To-Market Strategist with AWS and I'll be joined today by Justin Herman, the VP and CIO of Panasonic Energy, North America, as well as Shaun Kirby, who's a Principal IoT Solution Architect at AWS.

So the agenda for today, I'm gonna start just going through some fundamentals of how to put together a smart manufacturing strategy and how you can scale that strategy through a framework called an industrial data fabric. And then Justin's gonna come up and talk about the success that they've had at Panasonic around scaling use cases using AI/ML.

Shaun will then talk about how to get started and walk through some of the initial use cases as examples, both with AWS and our partner Palantir. And then we'll wrap up with a future vision and where you can go to learn more.

So let's begin talking about how AWS is helping customers simplify digital transformation and really accelerate their smart manufacturing outcomes. Well, sometimes it's easy to get lost in the weeds with smart manufacturing buzzwords and acronyms like IoT, AI, ML. It's important to begin your smart manufacturing strategy with the foundations of the durable needs of how to run an operation and what it takes to be a world class manufacturing facility.

So I think about those durable needs really in three buckets. The first is reducing variation: how do we become predictably boring at key performance indicators like availability, performance, quality, and inventory management, and really instill that culture of relentless continuous improvement?

The second durable need is around resilient flexibility - so how do I more quickly respond to changes in customer demand by having a more flexible manufacturing capacity? This means moving fixed cost to variable costs, capex to opex, and driving interoperability of machines and systems so we can deliver that flexibility in the manufacturing operation.

And then the third durable need is speed. Speed wins in manufacturing. And so thinking about how we can better enable our engineers and our frontline operators to make better decisions faster is a key element of that smart manufacturing strategy.

Now, these durable needs maybe provide the foundation, but it's also important to be cognizant of the fact that we need to deliver value to the business. And as operations and supply chain leaders, you know that you're delivering value to your customers by delivering high quality products at lower or at least expected lead times. But you also need to deliver value to your business stakeholders by way of lowering costs, increasing revenue by way of increasing throughput, and improving working capital.

And so to demonstrate some of the financial opportunity, the value that we can drive from smart manufacturing, I have here a simple Operations 101 view, the DuPont model. I start with a basic understanding of the theoretical capacity of the plant. In this example, we're assuming we're gonna run 24/7, so in theory we can build 420,000 units a year. But we know we don't ship 420,000 units a year, right? Reality comes into play, and we actually ship a number much lower than that. The percentage we can use to understand the plant's actual production capability is OEE, or overall equipment effectiveness.

And OEE is a formula: the product of availability, performance, and quality. Now, OEE is another one of those terms we hear a lot about in smart manufacturing. It's not in the middle of this slide because it's a magic dashboard that solves all your problems. It's in the middle of the slide because it's actually a financial metric. It's a financial indicator that helps us work backwards into how we prioritize smart manufacturing use cases and how we calculate the value those use cases are delivering to the facility.

So to help explain that I can expand this model and look at some assumptions of the price and cost implications. So if I assume a price per unit, a variable cost per unit as well as allocated fixed costs over a given year, I can start to calculate what is the impact the plant has on the profitability of the business.

Now, where this matters from a smart manufacturing standpoint is when we think about how we can change this number. And if I just increase that OEE number by 10 points, even with a 5% increase in fixed costs, I can improve my overall profitability by almost 40%.
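To make that arithmetic concrete, here is a minimal sketch of the model just described. The 420,000-unit theoretical capacity, the 10-point OEE improvement, and the 5% fixed-cost increase come from the talk; every price, cost, and baseline OEE component below is an illustrative assumption chosen only to show the mechanics, not a real Panasonic figure.

```python
# Sketch of the DuPont-style plant profitability model from this section.
# All prices and costs are illustrative assumptions.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of availability, performance, and quality."""
    return availability * performance * quality

def annual_profit(capacity_units: float, oee_pct: float,
                  price: float, variable_cost: float, fixed_cost: float) -> float:
    """Profit = shipped units times unit margin, minus allocated fixed cost."""
    shipped = capacity_units * oee_pct
    return shipped * (price - variable_cost) - fixed_cost

CAPACITY = 420_000                                    # theoretical 24/7 capacity
BASE_OEE = oee(availability=0.82, performance=0.88, quality=0.90)  # about 0.65

baseline = annual_profit(CAPACITY, BASE_OEE,
                         price=100, variable_cost=60, fixed_cost=7_500_000)
improved = annual_profit(CAPACITY, BASE_OEE + 0.10,   # +10 OEE points
                         price=100, variable_cost=60,
                         fixed_cost=7_500_000 * 1.05)  # 5% more fixed cost

uplift = (improved - baseline) / baseline
print(f"baseline profit: {baseline:,.0f}")
print(f"improved profit: {improved:,.0f}")
print(f"profit uplift:   {uplift:.1%}")               # roughly 38% with these assumptions
```

With these assumed margins, a 10-point OEE gain lifts profit by nearly 40% even after the fixed-cost increase, which is the "shadow capacity" effect described next.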

And so this really gets at what we oftentimes term as "shadow capacity," which is essentially the additional units that I can get out of the factory without making investments in more machines, more production lines or additional capital expense, right? But it also begs the question, how do I improve OEE by 10 points? And that's really where we start to work backwards into how we prioritize use cases that we can go deliver from a smart manufacturing strategy standpoint.

And when I think about those use cases that I prioritize, I also want to think about how I can then layer in some of those technologies like IoT, AI and ML to accelerate those improvements at greater scale. And that really brings us to the heart of how we think about a smart manufacturing strategy and the use cases that we can go deliver.

Now as we do those use cases, oftentimes we really focus on the y-axis of this chart, the analytics maturity. So for example, we take a use case of machine availability. And if I want to improve machine availability, I might start at a standpoint of what is my real-time visibility - what's the dashboard or visibility that I have into that machine being up, down or idle?

But then I progress up that maturity curve and I think about how can I apply machine learning and predictive analytics to predict when that machine is gonna fail and then prevent that failure by addressing it ahead of that time. I then might think about how I take it even further and apply artificial intelligence as a way to optimize the way that I run that machine.

But that view is very centric to that single machine or even that single production line. And so if I wanna scale that analytic capability across multiple machines, or if I wanna take that ability and scale it across multiple sites, I need to combine my analytics maturity with what we call here data ops maturity or the x-axis. And this gets to the heart of how we strategically think about our data foundations, how we're pulling data together, organizing that data and contextualizing it such that we can really accelerate these analytic use cases.

And if I can begin with the end in mind around that data strategy, that's how I can take any of these use cases and scale them very quickly across multiple production lines and across multiple sites in the enterprise and ultimately achieve that full on digital maturity.

So the way we do that at AWS is through a framework we call the industrial data fabric. And the way that I like to explain this framework is starting at the top, and we think again about those use cases because we always want to be working backwards from where we're gonna deliver the biggest bang for the buck to the business and deliver the best outcomes to our business stakeholders.

Once I identify those use cases, I then need to understand where the data is gonna come from, and that's what I show on the bottom of the framework. It could be sitting in a historian, or we could be pulling data out of your MES, your manufacturing execution system. I need to understand that landscape. The role that AWS and our partners play is to connect the dots between the top and the bottom: we work backwards from those use cases to apply a portfolio of analytics and machine learning services that provide the insights to deliver that use case.

Any of those analytic services are gonna want to pull data from a secure, central data repository, and that's what we show in the storage layer. What becomes key in that storage layer is not just thinking about it as an S3 bucket, but thinking about the most cost-efficient way to store the data and how to effectively contextualize it so that it can serve a multitude of use cases.

I also need to get the data into that storage layer and that's the role of our IoT services or our streaming services. And then typically in manufacturing, we're deploying in a hybrid architecture which means we need to account for a number of different edge capabilities to do storage and compute at the edge as well.

So these core bottom layers of the industrial data fabric are really important for establishing that long-term data strategy that's gonna allow us to accelerate how we innovate on additional use cases and scale our current use cases, again, across multiple lines or multiple sites.

And once we have those data foundations in place, that's really then where the wealth of analytics and AI/ML services come to bear from an AWS perspective because then we can think about that use case and defining the problem statement of that use case in a way that we can solve it with data science. And then I can work backwards into what is the most cost efficient way to get the end result of that use case.

It might be putting together a dashboard in Amazon QuickSight. It might be deploying a managed ML service like Lookout for Vision, Lookout for Equipment, or Amazon Monitron. I might need to build a custom model in Amazon SageMaker, or I could be considering how I solve that problem with generative AI.

Any of these services, though, are gonna require that central repository and that contextualized data to work at scale. So to really take advantage of this portfolio, it's important to put those foundations in place.

Now, in addition to that stack view, we also want to think about how we deploy this. And it's important to think of that manufacturing strategy and deploying the number of use cases again with that end in mind, but also thinking about how we can compound the value of the data assets to improve your return on investment and start to achieve an ROI from these use cases in a matter of days and weeks and not months and years.

And Justin is gonna talk a little bit around how they've been able to do that at Panasonic as well. But the process that we do is we start with that first use case and building the pipeline, for example, an equipment state use case. And then as we move into the second phase, we're going to leverage and augment those data pipelines to go execute an increasing number of use cases. And we can do that because we've put the foundations in place from an asset model and a process model standpoint.

And then finally, the end state that we want to get to is having a proper industrial data fabric in place that really allows us to innovate at scale and accelerate our ability to experiment with new use cases because the data is contextualized and available for our use.

And so this is the process that we at AWS work through, both with our professional services teams and with our partners like Palantir. To dive a little deeper into how this theory comes to life, and to share some clear examples of the results Panasonic Energy North America has been able to achieve, I'd like to invite Justin up to the stage to talk through some of the specific results they've had through AI/ML use cases.

Right. There's a lot of talk in the market about scaling, and how do you do that? Well, we've been there and we have done that. As I mentioned, we started in 2016, and in 2017 we shipped our very first cell. By 2018, we had 1 million cells shipped; in 2019, we hit 1 billion. And as of today, we're over 8 billion cells shipped. That is close to 5.5 million cells consistently being manufactured on a daily basis, or 1 billion cells every six months. Tremendous scale.

So how does that translate from a technology perspective? With that scale, it is part of our responsibility to provide platforms that create efficiencies within the system itself. But by introducing these platforms, we are met with a host of challenges. Those of you who are in the manufacturing industry can probably relate to this. How often have you gone to your general manager and said, I've got this great idea, I'd like to implement it? One of the first things they'll tell you is, hey, this is working, do not touch it, right? I think a lot of us can relate to that in the manufacturing industry.

There's also a lot of tribal process knowledge accumulated throughout the years. And one of the constraints we have working in technology is that we typically work with small team sizes, so we have to leverage partners like AWS and Palantir to help us overcome those small team sizes. Then generally, there's a low adoption rate of AI within manufacturing itself, partly because our business does not understand what it is, and partly because we have not done a good enough job of explaining how AI can help in the manufacturing process. And then from a data perspective, we have low data availability.

Now, the key word there is availability. At Panasonic Energy, we produce over a petabyte of data per day, but we are only able to leverage less than 5% of that, right? So the availability of data is key. We also have many siloed systems, legacy applications, and non-scalable architecture. So the question we have to ask ourselves is: how can we develop a platform that can help accelerate innovation at speed? This is where partners like AWS and Palantir come into play, helping us create a logically centralized, highly scalable platform that can take data from multiple sources through a common ingestion framework and provide contextualized data to our end users, allowing them to make real-time decisions.

This is the industrial data fabric that Joe was talking about, right? So as we started our smart factory journey, there were three key aspects we wanted to achieve. The first one is speed to value, and I'll talk a little more about that in a second. The second, of course, is safety, quality, and security. And the third one is performance.

So I want to talk a little bit about speed to value. One of my favorite quotes is actually from Klaus Schwab, and I'm gonna paraphrase it a little: it is no longer the big fish that eats the small fish; it's the fast fish that eats the slow fish. And that is so true in the world we live in today, particularly from a technology perspective. We have to do things at speed, and we've got to show our business how we are producing value for them. The days of a six-month POC inside a 12-month program that only delivers at the end are well past us. Our business works in minutes and seconds. We have to deliver ROI in days and weeks, as opposed to months and years.

So what you're seeing on the screen now is our timeline at the start of our smart factory journey. When we started, we had a 12-month program. Too long, right? We cut that down to six months with three use cases. Well, that's not enough, so let's increase it to six use cases, but deliver the first use case with ROI within one month. And that is exactly what the team was able to do. The team delivered a fully functional use case with ROI within the first month of starting our smart factory journey, leveraging our partners from AWS as well as our partners from Palantir.

We started off with our use cases, and all of them are connected, building off of one another at scale. And sometimes you've got to be careful what you ask for, because after these six use cases, our business realized the value, and we have over 22 use cases in the pipeline right now, many of them being delivered, one of them actually today.

So I'd like to talk a little bit about some of these use cases. The first one I want to talk about is our edge use case. This particular use case addressed something classical computer vision sometimes does, which is over-judge material defects, creating waste. There are two aspects within manufacturing that you can address to increase the bottom line: one is to increase throughput, and the second is to reduce waste. This particular use case focused on the second aspect, reducing waste.

So as I mentioned, the classical computer vision was really over-judging the material and creating false positives. The team took a look at this and asked themselves: how can we utilize AI as a secondary judgment factor to help reduce waste? And because of our processes (this was one of the challenges), and depending on whether it's a high-speed line or a regular line, we only had between 400 and 800 milliseconds between that judgment and the no-good material being put aside. That's a very small amount of time.

So what we had to create was an AI in-line process judgment that we could put in our processes to reduce that waste. Working with AWS and Palantir, we were able to not only reduce our waste but reduce it by 10 to 15%. When you are producing 5.5 million cells per day, that 10 to 15% reduction in waste has a significant ROI for our business.

The second use case I want to talk to is the maintenance-assist use case, and I'm very proud of the team because this is one I'm very excited about. This is a true generative AI use case that was actually deployed today. One of my colleagues is sitting in the audience, and he's nodding, saying yes, it was successful, so that's great news. This particular use case looks at how we can use generative AI to take the time it takes to train new employees, which can be up to nine months, and reduce that timeline so they can operate effectively within just a few months.

In addition, how do we leverage large language models to ingest hundreds of thousands of reactive tickets and put that power at the fingertips of our operators, bringing in data from multiple sources, including our ERP, MES, and SCADA systems, our standard operating procedures, and our work instructions, including audio and visual data as well? This was a tremendous task, and the industrial data fabric provided by AWS allowed us to take these large amounts of data, contextualize them, and put them in a format that these large language models could then put at the fingertips of our end users.
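As a toy illustration of the retrieval idea behind a maintenance-assist tool, ranking historical tickets against an operator's question, here is a minimal bag-of-words sketch. The ticket texts and the scoring scheme are invented for illustration; the production system described here uses large language models over contextualized plant data, not this simple similarity.

```python
# Minimal sketch: rank past maintenance tickets by similarity to a query.
# Real systems would use LLM embeddings; this uses bag-of-words cosine.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_tickets(query, tickets, k=2):
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(t))), t) for t in tickets]
    return [t for s, t in sorted(scored, reverse=True)[:k] if s > 0]

# Illustrative ticket texts, not real plant data:
tickets = [
    "electrolyte fill nozzle clogged, cleaned and restarted line 3",
    "conveyor belt misalignment on line 1, adjusted tension",
    "fill weight drift on electrolyte dispenser, recalibrated scale",
]
print(top_tickets("electrolyte fill weight drifting low", tickets))
```

The same shape (retrieve the most relevant past incidents, then present them to the operator) is what puts "what the previous shift did" at an operator's fingertips.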

And as I mentioned, today we just went live with this, and we have taken the 35 to 40 minutes our operators would spend searching for data and reduced it significantly. As I said at the beginning, every minute, every second counts from an ROI perspective. So no more trying to figure out what the previous shift did. No more thinking, hey, I've got to go find that shift supervisor to make a decision. All of those data points are at the fingertips of our operators, who can make real-time decisions.

So these are just two of the use cases we have deployed from a smart factory perspective; we have multiple others, and it has been a great journey working with AWS and Palantir. I'm now going to bring Shaun to the stage, who will share some of those use cases with you in a little more detail, and then I'll come back to wrap up at the end.

Well, thanks Justin. We're going to take a close look now at how we implemented a couple of the use cases. The first is the computer vision use case that Justin introduced, and the second is a use case around predictive maintenance. So you've heard about the industrial data fabric and seen a little bit about the architecture behind it. This is a comprehensive collection of technologies and architecture patterns that have been proven to solve a wide variety of industrial use cases, but implementing it all at once can be really daunting.

So the approach we took was to think big, but start small, and then move very quickly. This allowed us both to aim very high for the long run and to achieve immediate business value quickly. We started out with an align phase: over a couple of weeks, we dove deep into the current business and technology challenges, understood a number of different use cases and the value drivers for how we could bring business value, and then prioritized those use cases. And in about a week on site, we dove into the current technology landscape to understand the systems and technologies we needed to integrate with, and where we could get the data we needed to perform these industry-leading analytics and really take operations to the next level.

We then went into a launch phase, where we implemented initial solutions for a number of these use cases and very quickly established business value. This put in place a large portion of the industrial data fabric as reusable components that Panasonic Energy can now use, not only for additional use cases but also to scale the platform to additional sites around the globe.

So let's take a look at a couple of these use cases: the particulars of the business value we set out to achieve, the target architecture we created, guided by the industrial data fabric reference architecture, how we implemented it, and then the business results achieved.

So the first is around computer vision, and the application is quality assurance. Now, as you probably know, Panasonic Energy has a very high bar for quality in battery manufacturing: there can be zero escapes, meaning no defective batteries. For this reason, the current quality assurance system sets a pretty high bar, and it was actually identifying some false positives, rejecting some batteries that were actually perfectly good. We realized that if we could reduce that false positive rate even a little, we would be able to improve the yield and the overall efficiency of our manufacturing process.

So that's what we set out to do, while also maintaining that very high bar of zero escapes. As Justin was describing, the approach we took was to implement a solution in line with the current computer-vision-based solution. The new solution would use industry-leading deep neural network models to identify defects and classify them very accurately.

But as Justin mentioned as well, for the high-speed battery manufacturing lines, the big challenge was achieving extremely low latency with our algorithms. In addition, in order to achieve zero escapes, we had more than just the deep neural network; we also baked in some traditional computer vision that looked at certain characteristics of the batteries, and if they were within spec, that helped us validate the deep neural network. Altogether, the solution workflow, as you can see here on the right, consists of several components. We took into account the judgment, the evaluation of the battery, from the current computer vision system.

We then ran it through inference with the deep neural network, and then through another computer vision model that checked the different specs. If the two models, the deep neural network and the computer vision spec analysis, agreed, then we had confidence that that judgment was correct. Only then, if we differed from the original computer vision system, would we overrule a judgment of a defective battery, because it was actually a good battery.
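The overrule logic just described (only reverse a reject when both the deep neural network and the spec check independently agree the battery is good) can be sketched as a simple decision function. The function name and boolean inputs are illustrative stand-ins for the real inference and spec-analysis outputs.

```python
# Sketch of the secondary-judgment workflow: only overrule the existing
# system's "defect" call when BOTH the deep neural network and the
# traditional spec check say the battery is good. Inputs are stand-ins
# for real model outputs.

def final_judgment(existing_says_defect: bool,
                   dnn_says_good: bool,
                   within_spec: bool) -> str:
    if not existing_says_defect:
        return "pass"              # existing system passed it; no action needed
    if dnn_says_good and within_spec:
        return "overruled-pass"    # both models agree: keep the battery
    return "reject"                # any disagreement: err toward zero escapes

print(final_judgment(True, True, True))    # both agree it is good
print(final_judgment(True, True, False))   # spec check disagrees
```

Requiring agreement between two independent models is what preserves the zero-escapes bar while still recovering falsely rejected batteries.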

For all of this, we had to gather the input video imagery and the judgment from the existing system, run that data through these algorithmic steps, and then send the result back to the controller in order not to kick out a battery that was perfectly good. The architecture we used for this is shown here on the left, and it consists of parts both at the edge and in the cloud. This continuum of technologies, with components on site where you can get really low latency for these ultra-fast applications, was critical for us.

For that, we used the Palantir Foundry platform, which is a great way of developing ultra-low-latency applications at the edge. We trained the computer vision model, the deep neural network, using Amazon SageMaker in the cloud, and we were then able to deploy it to Palantir Foundry at the edge. That edge system was also sending information continuously back to the cloud, so that SageMaker could monitor for things like model drift and correct it, keeping our model performing at the highest level.

Not only that, but throughout production we'll be able to continuously get more and more data back from the solution, so we can continually evolve and perfect the model over time. So this is a really great combination of edge technologies like Palantir Foundry and SageMaker in the cloud, and it gave us some great results.

We started out by training the deep neural network model on about 15,000 images across 17 different classes: five good classes and 12 defect classes. We were able to achieve 98% accuracy on defect detection. This led overall to up to a 15% reduction in false positives, and that might seem like a modest number, but at Panasonic Energy's manufacturing scale, it is going to result in a savings of over a million batteries over time. So a nice result.
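To get a feel for how a 15% false-positive reduction can add up to a million batteries, here is a back-of-the-envelope sketch. The 5.5 million cells per day and the "up to 15%" figures come from the talk; the baseline false-positive rate is purely an assumption for illustration.

```python
# Back-of-the-envelope: how a modest false-positive reduction compounds
# at manufacturing scale. The baseline FP rate is an assumed value.

CELLS_PER_DAY = 5_500_000
ASSUMED_FP_RATE = 0.001     # assume 0.1% of good cells wrongly rejected
FP_REDUCTION = 0.15         # up to 15% fewer false positives

saved_per_day = CELLS_PER_DAY * ASSUMED_FP_RATE * FP_REDUCTION
days_to_a_million = 1_000_000 / saved_per_day
print(f"cells saved per day: {saved_per_day:,.0f}")
print(f"days to save a million cells: {days_to_a_million:,.0f}")
```

Even with this conservative assumed baseline rate, the savings cross a million recovered cells within a few years of production.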

Now, the second use case we'll go into a little more depth on is a predictive maintenance use case, and the application here is electrolyte filling for batteries. This is a critical step in the manufacturing process, but it involves a lot of moving parts; the equipment used to do this requires a lot of maintenance, and it can often be hard to predict when an issue might occur.

So we set out to see whether we could actually predict when the equipment might need maintenance, do a little preventive maintenance ahead of time, and avoid the more extensive repairs that could cause things like production outages or downtime. The benefits, of course, would include a reduction in unplanned outages and an increase in OEE and some of the other KPIs Joe was talking about. But it had another interesting benefit as well, in terms of employee experience, because preventive maintenance is much faster and simpler than fixing these issues after the fact, which is a more arduous and time-consuming process.

So it really has a number of benefits that go straight to the bottom line. How did we do this? How did we get hold of data that had been elusive to date? And how did we create a really industry-leading model for this kind of prediction? Because this really hadn't been done before.

Well, it turns out that a threshold-based predictor algorithm, based on time series aggregated at different levels in a hierarchy, proved to be a really great solution here. Some of the time series data, for example, involved telemetry such as the weight of batteries at certain points in time, the amount of electrolyte being dispensed at this step in the manufacturing process, or telemetry from the machines themselves, like spindle performance factors.

Most of that data we could get from the manufacturing execution system, or MES. We fed that into Palantir Foundry, this time in the cloud, and Foundry aggregated those time series at several different levels: from the individual device or component within the equipment, to the equipment level itself, consisting of multiple devices, all the way up to the line level.

So we had these beautiful aggregated time series, and we were able to tune thresholds such that if a time series wandered outside those thresholds significantly enough, we were pretty likely to be about to encounter an incident with that equipment. We were able to match this against incident logs, where operators had taken note of performance and past issues with this type of equipment, and the results were pretty impressive here as well.
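The threshold approach described above can be sketched in a few lines: aggregate a raw telemetry series into window means, then flag any window whose mean wanders outside tuned bounds. The telemetry values and thresholds below are illustrative assumptions, not plant data.

```python
# Sketch of a threshold-based predictor over aggregated telemetry: roll up
# a raw series (e.g. dispensed electrolyte weight) into window means, then
# flag windows outside tuned bounds as early maintenance warnings.

def window_means(series, size):
    """Aggregate a series into non-overlapping window averages."""
    return [sum(series[i:i + size]) / size
            for i in range(0, len(series) - size + 1, size)]

def flag_warnings(series, size, low, high):
    """Return indices of aggregated windows outside [low, high]."""
    return [i for i, m in enumerate(window_means(series, size))
            if not (low <= m <= high)]

# Assumed dispensed-weight telemetry, drifting upward toward the end:
telemetry = [5.01, 4.99, 5.02, 5.00, 4.98, 5.01,
             5.06, 5.09, 5.12, 5.15, 5.18, 5.21]
print(flag_warnings(telemetry, size=3, low=4.95, high=5.05))  # -> [2, 3]
```

A real system tunes the window size and bounds per device against historical incident logs, and rolls the aggregation up from component to equipment to line level; the flagged windows become the hours-in-advance warnings described next.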

With tuning of those threshold parameters, we were able to achieve 95% accuracy, meaning we could have predicted 95% of those past historical issues at least two hours in advance, and sometimes as much as six hours in advance. That was enough time to really move the needle, so that we could do something about an issue and dramatically reduce unplanned downtime.

So these are just a couple of examples of these use cases and how we've been able to implement those using the architecture patterns of the industrial data fabric. But I know Justin has a much greater vision for the future and some very exciting things. So I will hand back over to him to help us see around the next corner.

Yeah, thanks. I appreciate that. So one thing we are very proud of at Panasonic Energy is our new De Soto, Kansas facility, which is currently being built. And when we look at our smart factory journey to date, we really talk about the connected Giga factory. Now, we call it the connected Giga factory; you can call it a connected factory, connected warehousing, whatever you might want to call it. Our vision for this is: how do we leverage the platforms that have been deployed in Nevada, bring them over to Kansas and any others that might come up in the future, and reduce the cost of implementation by scaling this up?

And I'll give you two quick examples of that. During the process of building this De Soto, Kansas facility, we've already deployed two applications based on the models introduced in Nevada, even though we're not actually producing anything yet. We've deployed these in a finance area as well as in construction, managing the schedule. So we are already seeing returns in Kansas on our initial smart factory investment in Nevada, and this is what we're trying to do at an even larger scale.

When we think about Kansas, we think about any future Giga factories that might come up, and we think about our facilities in Japan, where we've also started to engage Palantir as well. That is the true vision of the connected Giga factory. Each use case we deploy has a higher ROI attached to it, because we are leveraging the LLMs, the generative AI, and the industrial data fabric that has been implemented in Nevada, provided by AWS.

So in closing, what I would say to all of you sitting here who are potentially thinking about this journey ahead, and it can be quite daunting at times, is: take a step back, understand what your business is looking for, and find those areas where you can show real speed to value. Bring in partners that are true partners, who are not looking to monopolize your smart factory. You heard Shaun and Joe talk about Palantir here, right? You have a multitude of vendors you can choose from. Find those vendors who will work collaboratively with you to create that industrial data fabric, create those use cases, and be a champion for you moving forward.

We found that in AWS, and we found that in Palantir, and that's our vision moving forward into the future. Thank you very much for your time. Appreciate it.

All right. Thank you, Justin, and thank you for sharing your story and all the results at Panasonic Energy North America. Truly impressive, and great to see that you're taking the learnings from Nevada and now applying them to Kansas as well. So thank you for joining us today.

Before we wrap up, we just wanna highlight that, especially since it's Monday of Re:Invent, there's many opportunities this week to learn more about how AWS and our partners can help you with your manufacturing and industrial use cases.

So one specific way is to make sure you visit the manufacturing and industrial demo experience in the main expo, in the Venetian; you can see the times here on the right side. At the expo, our technical experts will be on hand to answer any questions, and there are also 10 different demos showcasing capabilities from smart manufacturing to AI/ML, generative AI, product engineering, supply chain, and sustainability, a number of different use cases in the industrial space.

The industrial demo experience is located in the industry zone, towards the back of the main expo, and we definitely look forward to seeing you there, either this evening or later in the week.

The other thing, as we wrap up, is that we ask you to go into the app and leave feedback on today's session so we can improve these sessions going forward. We want to thank you for your time today on behalf of Justin, Shaun, and myself. Thank you for joining us.

We will be available here for any questions, either up front or in the hallway after the session today. But thank you for your time and enjoy the rest of your Re:Invent this week.

余额充值