By Clarise Rautenbach
25 November 2024

Digital Infrastructure: Information Management with Flow


Introduction

What’s the difference between data management and information management – and why should you care? Gary shares how Flow eliminates silos, scales your analytics, and securely delivers actionable information.

SPEAKERS:

Gary Lowenstein
Customer Success Lead
Element8

Transcript

00:10
Gary
Good afternoon, everybody. For those of you who don’t know me, my name is Gary Lowenstein. I head up the customer success team at Element8, and thank you, everybody, for taking the time to spend with us on a Friday afternoon. On a Friday afternoon, most of the system integrators should be drinking beer by now, and customers surely shouldn’t work on a Friday. So thank you all for taking the time to spend with us. This morning in Ricker’s session, he told us that every time there’s a new information project, somebody comes along and says, I need more information. They create a silo of information. They go along and they pick a tool where they want to present the data. They work out where the data resides. Then they probably copy that data into some kind of a data mart or something.

01:11
Gary
It might be six months’ worth of data, it might be a year’s worth of data. They then go through all those steps of ingesting the data – collecting, ingesting, cleansing, validating – and they do it over and over again. Then there’s the concept of a UNS, a Unified Namespace, one of our famous TLAs, and people say, well, why can’t I just use a UNS? What’s the problem with a UNS? And a UNS is fantastic. The problem from an analytics perspective is that a UNS focuses on real-time data. And as Laura hinted, and Carl did as well, when you want to do analytics, or you want to feed something for machine learning, you need a hell of a lot of data to do it. That’s generally our historical data. And we need other data around it. We need to contextualise that data.

02:08
Gary
That’s where the Unified Analytics Framework comes in. The Unified Analytics Framework gives us a place where we can centralise all of the development work. It gives us governance around things like units of measure. Take a simple thing like a unit of measure: you go to one plant, a bottling plant, and they’ll measure bottles per hour. And in the next plant you go to, in the same company, they measure cases per hour, or cases per shift. How do you get all of that into the right kind of context? You need to understand the events that are taking place around it. Was I running the machine at the correct speed? Was I running it over design specification? All of those things need to be provided. And the Unified Analytics Framework gives us a central place where our subject matter experts can collaborate, much like the Unified Namespace.
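
To make the units-of-measure point concrete, here is a minimal sketch of what that governance could look like: one conversion rule, defined once, normalising each plant’s local unit to a common base. The pack size, tag names and figures are invented for illustration; this is not Flow’s actual mechanism.

```python
# Minimal sketch: normalising plant-level rates to one governed base unit.
# BOTTLES_PER_CASE and the plant readings are hypothetical.

BOTTLES_PER_CASE = 24  # assumed pack size; in practice this lives in the model

# Each plant reports throughput in its own local unit.
plant_readings = [
    {"plant": "Plant A", "value": 12_000, "unit": "bottles/hour"},
    {"plant": "Plant B", "value": 480, "unit": "cases/hour"},
]

# Governance: one conversion rule per unit, defined once, used everywhere.
to_bottles_per_hour = {
    "bottles/hour": lambda v: v,
    "cases/hour": lambda v: v * BOTTLES_PER_CASE,
}

for r in plant_readings:
    base = to_bottles_per_hour[r["unit"]](r["value"])
    print(f'{r["plant"]}: {base:,.0f} bottles/hour')
```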

03:06
Gary
With the Unified Analytics Framework, we connect the UAF to everything and everything to the UAF, so it works pretty well. We don’t do the same thing over and over again; we do it once. If we change one component, we change one part of the framework. We don’t go and change all of the ones that have already been done. And this is where Flow comes in. So what we see in a lot of companies – that slide is a bit weird – is that a lot of them don’t have a strategy for information management. They really don’t have a strategy. So you need to understand two things. First of all, what is data management? Data management is all about collecting that data: making sure that you can collect it, making sure that you can store it, and making sure that it is available.

04:06
Gary
Whereas information management is really about transforming that data we’ve collected into actionable information, so that you can make informed decisions with that information. So Flow does this all for us in three really simple steps. The first thing that we’ve got to look at is information modelling. An information model defines the key data in the organisation. It focuses on establishing governance around that data: how it’s created, how it’s managed and how it’s used. And that model contains all the rules for how you transform that data into actionable information. So what are the rules around units of measure, states of machines, states of pieces of equipment? That all lives in the information model. The intelligent engines are built for manufacturing environments. They do all of the calculations, data processing and real-time notifications.
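
As a rough illustration of the idea – not Flow’s actual schema, every field name here is invented – an information model entry might capture a measure, its unit, how it aggregates, and the state rules used to contextualise it:

```python
# Illustrative information model entries; field names are assumptions, not Flow's schema.
information_model = {
    "Line1.BottleCount": {
        "unit": "bottles",
        "aggregation": "sum",     # how this measure rolls up into time buckets
        "source": "historian",    # where the raw data lives
    },
    "Line1.MachineState": {
        "unit": "state",
        "states": {0: "Stopped", 1: "Running", 2: "Cleaning"},
        # a governance rule: flag when the machine runs over design specification
        "rules": {"over_design_speed": "Line1.Speed > 600"},  # bottles/min, assumed
    },
}

# Every consumer reads the same definition, so the rules live in one place.
print(information_model["Line1.MachineState"]["states"][2])  # -> Cleaning
```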

05:20
Gary
The cool thing about Flow is that it doesn’t replicate any data. All it does is use that data to create the analytics we need, to create the KPIs we need. And it stores those KPIs, but it always has access to the underlying data that was used in creating them. So it’s a central information hub that offers comprehensive dashboards and reports and access to all of the information that we need. So if I put up two data points there, does anybody want to tell me what they mean? Nothing, thank you very much. Absolutely nothing. So I go along and I add a little bit of context – and ideally I should inherit as much context as possible at the edge.

06:20
Gary
But if I add a little bit of context to it, I can see, well, one is probably a totalizer value – I can tell from the tag name – and from the other tag path I can understand it’s probably from a power meter; that’s what it’s indicating to me. So give it a little bit of context and you start getting some kind of an indication of what it’s all about. But I need more than that, because that’s real time. I need more than that real-time data. So I say, okay, cool, I’m going to put in a process historian. Just by putting in a process historian, I can immediately see something like: what are my production totals for the day? What is my kilowatt usage at an instantaneous point in time? Now, what about where I don’t have data historised?
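
A minimal sketch of that first step: two raw numbers become readable only once the tag path supplies context. The paths and values here are hypothetical.

```python
# Two raw values mean nothing on their own; the tag path supplies the context.
raw_points = {
    "Site1/Line1/Filler/BottleTotalizer": 184_233.0,  # name suggests a totalizer
    "Site1/Line1/PowerMeter/kWh_Total": 9_512.4,      # path suggests a power meter
}

for path, value in raw_points.items():
    site, line, device, tag = path.split("/")
    print(f"{value:>12,.1f}  <- {tag} on {device} ({site}/{line})")
```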

07:10
Gary
So this morning we got a little hint about TimeBase. If I don’t have sufficient tags in my historian, or there’s something that I don’t necessarily want to historise, then even without TimeBase, Flow has the capability to store historical data for me on a short-term basis – 30 days’ worth of data. But out of those two data points I can straight away get two KPIs. I think I did say this morning that out of a couple of data points, we’re going to look at how many KPIs we can get. So once I add history to it in Flow, I can look at that data and see, well, there’s a bit of an outlier there; it may be data that I don’t necessarily want to include in my calculations.
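
As a sketch of how those first two KPIs fall out of the raw counters – assuming both tags are monotonically increasing totalizers, with tag names and sample data invented – differencing each counter over the day gives production and energy totals:

```python
import pandas as pd

# Hypothetical hourly samples of two totalizer tags.
idx = pd.date_range("2024-11-22", periods=25, freq="h")
df = pd.DataFrame({
    "BottleTotalizer": range(0, 25_000, 1_000),  # monotonically increasing counter
    "kWh_Total": [i * 42.0 for i in range(25)],
}, index=idx)

# KPI per day = counter at end of day minus counter at start of day.
daily = df.resample("D").last() - df.resample("D").first()
daily.columns = ["Bottles produced", "kWh used"]
print(daily)
```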

07:57
Gary
We all know you get a spike on a piece of equipment and the data’s rubbish. So what Flow can do for me is provide the data cleansing: the Flow calculation executes and I can scrub out all of my bad data. Now we go and add time intervals, and time is a very different thing to different people. What is a day? 24 hours, yes, but how do you define that day? Midnight to midnight, 6am to 6am, 7am to 7am? So Flow allows me to allocate everything into time buckets, so I can go and do some aggregation – it’s not showing very nicely, but there are some bars in there.
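
A minimal sketch of both ideas: scrub values outside plausible engineering limits, then allocate what survives into time buckets. The limits and sample data are assumed, not Flow’s actual cleansing logic.

```python
import pandas as pd

# Hypothetical 15-minute production counts, including one obvious spike.
idx = pd.date_range("2024-11-22 06:00", periods=8, freq="15min")
rate = pd.Series([410, 405, 9_999, 398, 0, 402, 395, 400], index=idx)

LOW, HIGH = 0, 1_000                              # engineering limits (assumed)
clean = rate.mask((rate < LOW) | (rate > HIGH))   # bad data becomes NaN

hourly = clean.resample("h").sum(min_count=1)     # hourly time buckets
print(hourly)
```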

08:53
Gary
And I can do aggregation based on hour, day, week, shift to date, month to date, day to date. I can go and do all of those calculations. What I’ve just shown here is that I can get another two calculations out of it: I can get my hourly energy usage, and I can do my hourly production in bottles. Cool. And I could probably do quite a bit of that just using a standard historian. Yeah, but what about picking up and detecting events and data states? So I might pick up that in this state here I didn’t have any production, and I might know that’s a cleaning cycle: if I don’t produce for more than a certain period of time, it’s a cleaning cycle.
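
To illustrate the event-detection idea, here is a sketch that flags any no-production run longer than a threshold as a cleaning cycle. The threshold, sample interval and data are all assumed.

```python
import pandas as pd

# Hypothetical 30-minute production counts.
idx = pd.date_range("2024-11-22 06:00", periods=12, freq="30min")
bottles = pd.Series([210, 200, 0, 0, 0, 0, 195, 205, 0, 198, 202, 200], index=idx)

idle = bottles.eq(0)
run_id = (idle != idle.shift()).cumsum()   # label consecutive runs of samples

for _, run in bottles[idle].groupby(run_id[idle]):
    duration = run.index[-1] - run.index[0] + pd.Timedelta("30min")
    if duration >= pd.Timedelta("1h"):     # assumed cleaning-cycle threshold
        print(f"Cleaning cycle detected at {run.index[0]}, duration {duration}")
```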

09:40
Gary
And from that I can now go and establish, okay, what was my energy usage per cleaning cycle? I got a bit ahead of myself there. So that’s another KPI: what did I use in that cleaning cycle? But I want to go and add more context to it. So I might have data from a completely different system, a batch system; my batch data doesn’t live in my historian. So I might want to go and understand what the batches are. I can get that from my batching system, or from wherever it may live. And based on that, I can now go and say, well, how many bottles did I produce per batch, and what was my energy usage per batch? So I want to understand product A versus product B: what was my energy usage, what was my total production, etc., etc.
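
A sketch of that contextualisation step: batch start and end times come from another system and are used to slice the time-series KPIs, giving bottles and energy per batch. All figures and batch IDs are invented.

```python
import pandas as pd

# Hourly KPIs (from the historian side) and batch events (from a batch system).
hourly = pd.DataFrame({
    "bottles": [400, 420, 0, 410, 415],
    "kwh": [55, 57, 40, 56, 58],
}, index=pd.date_range("2024-11-22 06:00", periods=5, freq="h"))

batches = pd.DataFrame({
    "batch": ["A-101", "B-202"],
    "start": pd.to_datetime(["2024-11-22 06:00", "2024-11-22 09:00"]),
    "end":   pd.to_datetime(["2024-11-22 08:00", "2024-11-22 11:00"]),
})

# Slice the KPIs by each batch's time window.
for b in batches.itertuples():
    window = hourly[(hourly.index >= b.start) & (hourly.index < b.end)]
    print(b.batch, "bottles:", window.bottles.sum(), "kWh:", window.kwh.sum())
```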

10:37
Gary
That’s my engine at the top there – the PC is obviously a bit slow. I might also want to understand another KPI, which is: how long was that cleaning duration? Then I can also detect downtime from this. If I’ve got certain machine states, I can go and understand my downtime. And very important is the ability to capture data manually, because very often a downtime event occurs, and who has the best knowledge about that downtime? Probably the operator. So giving the operator the ability to fill in downtime reasons, downtime definitions, all of those kinds of things, I can then say, well, what were the causes of downtime? I can do Paretos around the downtime causes, duration in minutes, frequencies, all of those kinds of things. I can go and do additional analysis around them.
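
A minimal sketch of the Pareto idea: rank operator-captured downtime reasons by total duration and show each reason’s cumulative share. The reasons and durations are made up.

```python
import pandas as pd

# Hypothetical operator-captured downtime events.
downtime = pd.DataFrame({
    "reason": ["Label jam", "CIP overrun", "Label jam", "No bottles",
               "Label jam", "CIP overrun"],
    "minutes": [35, 60, 20, 15, 40, 25],
})

# Pareto: total duration and frequency per reason, worst first.
pareto = (downtime.groupby("reason")["minutes"].agg(["sum", "count"])
          .sort_values("sum", ascending=False))
pareto["cum_share_%"] = (pareto["sum"].cumsum() / pareto["sum"].sum() * 100).round(1)
print(pareto)
```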

11:38
Gary
Then I can go and do some data latching, so I can understand what values occurred at a specific point in time. And once I can do that, I don’t only have that, but I can also go and understand my rate of change. So how did I perform through the day? And then I could also go and slice it by shift and say: how much did I produce in shift A? How much did I produce in shift B? And where am I progressing in shift C? So I can get my shift production in bottles. I can do all sorts of things around that, and more things like my run rate – there’s my run rate in bottles in there. The next thing I want to understand is: based on my current production, where am I likely to end up?
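
A sketch of the shift-slicing and run-rate idea: latch a totalizer at shift boundaries (assumed 06:00/18:00 changeovers) for production per shift, and difference it for the rate of change. Data is invented.

```python
import pandas as pd

# Hypothetical hourly totalizer samples across one 12-hour shift and a bit more.
idx = pd.date_range("2024-11-22 06:00", periods=13, freq="h")
totalizer = pd.Series([i * 400 for i in range(13)], index=idx)

# Latch at 12-hour shift boundaries anchored on 06:00.
shift_bins = totalizer.resample("12h", offset="6h")
per_shift = shift_bins.last() - shift_bins.first()
print("Production per shift:\n", per_shift)

run_rate = totalizer.diff()   # rate of change, bottles per hour
print("Current run rate:", run_rate.iloc[-1], "bottles/hour")
```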

12:37
Gary
So we can do some simple linear regressions and say, based on the current run rate, where am I likely to get to? So I can do end-of-day totals, end-of-shift totals, end-of-month totals, all of that kind of stuff. I can also then go and connect to my ERP solution, compare my daily plan, and understand how close my actuals are to my plan. I think out of those two data points we got 1, 2, 3, 4, 5, 10, 15, 16, 17, 18 KPIs. Quite impressive, yeah? What I didn’t show you there is that when we do the downtime, we could always add a rand value to it.
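
A sketch of the projection, assuming a steady run rate: fit a straight line through the cumulative total so far and extrapolate to hour 24, then compare against an assumed ERP plan figure.

```python
import numpy as np

# Cumulative production so far (06:00 through 14:00); all figures hypothetical.
hours = np.arange(0, 9)
cumulative = np.array([0, 390, 810, 1190, 1600, 2010, 2380, 2800, 3210])

slope, intercept = np.polyfit(hours, cumulative, 1)  # simple linear regression
end_of_day = slope * 24 + intercept                  # projected 24-hour total

daily_plan = 9_500                                   # daily plan from the ERP (assumed)
print(f"Projected end-of-day total: {end_of_day:,.0f} bottles")
print(f"vs plan {daily_plan:,}: {end_of_day / daily_plan:.0%} of plan")
```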

13:25
Gary
Many years ago, when I was doing real work for a system integrator, I did a downtime project for a client. And what we did was we actually showed the operator the rand value of what they were doing. And it just made them think; it puts it into a different context. You know, at one of the mines I was at, when they said an hour’s downtime costs 3 million rand, you’re like, really? I didn’t think along those terms. So giving that kind of information to the operator is always very valuable. Now, over the last 40 years – and you can see I’ve been around for that period of time – we focused on two types of analytics. We focused on descriptive: we wanted to know what happened. That’s what we used historians for. We said, I want to go back and analyse my trend. What happened?

14:18
Gary
And at some point in time we wanted to understand: why did it happen? What were all the things that happened around it? But going into the future, with Industry 4.0 and all these fancy things that we want to do, we need to look at two other ways of analysing our data. We need to understand: when will it happen again? That’s very important. And just as important: what can I do to stop it from happening again? Both of these require massive amounts of data. And you have to cleanse that data; you’ve got to normalise it, get it into a standard format, and contextualise it before you can use it. Once again, thank you all very much for your time. I really do appreciate it.

15:12
Gary
I hope that you enjoy the rest of the conference and that you’ll stay with us for a drink afterwards. Has anybody got any questions? We do have a bit of time. Nothing. Cool, thank you.
