By Elian Zimmermann
07 May 2020

The New Flow 5.3


Watch now or bookmark this page.

Transcript

00:04
Lenny
All right, I think let’s get the show on the road. Well, first of all, I want to say good afternoon, everybody. Thank you so much for joining us. We’re going to have another one of our virtual learning sessions here today, co-hosted with Flow Software. My name, obviously, is Lenny, and I’ll be your co-host for today, along with my team on the call here, who I’ll introduce shortly. We’re really grateful that you could get together with us on this Wednesday afternoon, especially during these extraordinary and challenging times that we currently find ourselves in. And we really want to make this webinar series as informative and as valuable as possible for you all.

00:42
Lenny
For those of you that we have not yet met in person, we are very much looking forward to meeting you after this lockdown is done, so we can actually meet you guys face to face. We’re also very eager to learn about what your current needs are and how we can address and solve those needs for you. So on my side, from the Element Eight team, I want to say thank you to Clarice, our marketing and brand manager, for setting up this webinar. Clarice, as always, thank you very much. I myself, Lenny Smith, am the customer success and support manager, and we’re very privileged to also have the MD of Flow Software, Graham Welton, on the call with us as part of the panel today. And we’re very excited to introduce you to the Flow Information Platform; they’ve just released a new version.

01:28
Lenny
And Graham and I will tag-team a bit through the webinar here to deliver some of the content to you guys. We will have a Q&A session at the end of the webinar as well, but please feel free at any point in the webinar to start adding your comments and your questions via the Q&A option, and we’ll address them as we go through the webinar. But just before we get started with Flow, I want to give a little bit of an introduction to Element Eight ourselves. Right? So I can proudly say that we are, as of today, the authorized distributors of the Ignition SCADA platform, the Canary Historian, as well as the Flow Information Platform. And we honestly believe that these are a best-of-breed technology stack that offers a no-nonsense, bespoke, and unlimited licensing model.

02:19
Lenny
It is truly cost effective and flexible, and it gives you the solution that you need without complexity. It’s backed, obviously, by our responsive, friendly, and accountable tech support staff. Now, we’re a big believer in the power of the community, and our focus is definitely on the relationships we have with our channel partners. We work very hard with them to build and maintain those trusted relationships, and we also work hard to protect our partners’ relationships with their customers. Now, you’ll also see the little tagline we have at the bottom here, where we say that we at Element Eight believe that we help redefine human potential as a component of automation. Now, each and every one of us on the call today is on a journey, and currently this journey is being disrupted and tested.

03:13
Lenny
Our operational resilience is being tested, especially with the Covid-19 epidemic that is currently with us. But what history has shown us is that a global crisis like this is often the best opportunity for innovation; some of the most innovative companies and technologies were born during challenging times like these. And in a time of uncertainty and volatility in demand, digital technology can certainly make sense of a multitude of data quickly and optimally. And I really hope that what we’re going to show you here today with the Flow Information Platform will strengthen that point. The small part that we at Element Eight are playing for our community is to provide innovative technologies designed to propel your business and livelihoods to new heights.

04:03
Lenny
We are definitely open for business, and my team and everybody on the call here is ready to help you guys. And we really hope to have you on board to help make this world a little bit of a better place. So please reach out to us anytime and let us know how we can help you. And with that, thank you. Graham, over to you for an introduction to Flow.

04:28
Graham
Good afternoon, everybody. Well, it’s afternoon in South Africa; I see we have a couple of internationals from around the world. Welcome to everybody, and thank you very much for spending some time with us. We really appreciate it. And I think just before I start, I’d like to obviously thank Element Eight for the opportunity. As some of you will already know, we’ve recently moved our African distributorship across to Element Eight. We’re delighted to be part of the Element Eight stack and truly believe that Flow forms an integral component of the industrial information architecture that Element Eight offers. Right, so I guess before Lenny actually jumps in and starts giving us a demo of what Flow is, I want to just take you through a couple of slides to position Flow.

05:34
Graham
We probably have a couple of people on the call that don’t know much about Flow, and probably people that know a lot about Flow, so just bear with me, those of you who already know about Flow. I’m just going to go through a couple of slides to introduce the concepts of what Flow is and how it fits into the bigger information management picture. So, at the moment, as you guys may know, Flow has an office in Austin, Texas, in the USA, and we have an office here in Johannesburg. At the moment, our development team actually sits in Johannesburg, and the Austin office is really a sales office for our global sales. So what is Flow? Well, very simply, Flow is an information consolidation platform. But not only a consolidation platform.

06:38
Graham
We also like to talk about Flow as being a tool that allows other platforms to collaborate. Now, what does this actually mean? Well, what we’re trying to do here is gather and collate data from multiple data sources and bring it into a single platform, which we like to call the single version of the truth. You can access that information in various ways, but the data has been aggregated, it’s been calculated, it’s been versioned, and it is now available to you in this repository called Flow, as a single version of the truth with an audit trail on every single data point. And why do we need to do this?
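Flow’s internal storage isn’t shown in the talk, but the idea of a versioned value with an audit trail on every data point can be sketched roughly like this (all names here are illustrative, not Flow’s actual schema or API):

```python
from dataclasses import dataclass, field

@dataclass
class MeasureValue:
    value: float
    version: int

@dataclass
class Measure:
    # Every write is kept, so each data point carries its full history.
    history: list = field(default_factory=list)

    def write(self, value: float) -> None:
        self.history.append(MeasureValue(value, version=len(self.history) + 1))

    @property
    def current(self) -> float:
        # The latest version is the "preferred" value served to dashboards.
        return self.history[-1].value

m = Measure()
m.write(120.0)   # initial aggregation
m.write(118.5)   # corrected value; the old version is retained, not overwritten
```

The point of the sketch is only that corrections create new versions rather than destroying old ones, which is what makes the repository auditable.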

07:37
Graham
Well, ultimately, what we’re trying to do is sell decision support, and we want to support decision-making processes in manufacturing, in mining, in production, from the shop floor all the way up to the executives in the boardroom. And we certainly believe that Flow is a tool that can do this. Lenny, if you can just click one. Yeah, thank you. So I put this on the slide here just to give you an idea of the types of customers that Flow has, and the level of maturity of those customers. Now, these are customers that are in the information game. They understand that they need to manage their data and transform it into decision support information. This is key. One of the reasons I put this up here is to show you that this tool, Flow, is a very diverse tool.

08:45
Graham
It’s quite generic in that it can span multiple industries. If you have a look at the left-hand side, you’ll see very much a focus on the food and beverage industry. But as we move across to the right-hand side, we start seeing other industries: pharmaceutical, energy management (we’re doing some stuff in solar and in gas, for example), and then all the way across to the right-hand side, where we have our mining customers. Cool. So, just very quickly: most people, when they think about Flow, when they think about an information platform, immediately think about dashboarding and reporting. Now, this is obviously one component of the Flow information system. This is the more sexy side of it, and obviously we have to show this from a marketing perspective, from a dashboarding point of view.

09:54
Graham
Out of the box, you get the ability to build your dashboards and to theme them, so you can use your corporate colors in your dashboards. You can present these on big screens; it’s pure HTML5, and the panels on the dashboards auto-refresh. So you can use them in production halls, you can use them in meetings, you can use them on your laptop, you can use them on your devices. You can use these dashboards wherever you are, in collaboration with your colleagues, to support your decision-making processes. I put this one in here just to show you the top half, just to show you the timeline.

10:48
Lenny
Chart.

10:49
Graham
We call it a chart which splits up your time for a specific machine. In this case, it’s a cooler fan. And this gives you so much information: you can see here when a certain event for this machine started and stopped, you can see when it was in production, and you can see when it was down, not running. And further to this, you can also see certain attributed information. So we can attribute events that are being recorded from your systems, from multiple data sources, and present this information graphically like this. But remember that dashboarding is not the only component of Flow. Further to this, we can take this attributed event information and create KPIs as secondary aggregations on top of it. A very powerful concept.

11:52
Graham
It’s not easy to do in standard SQL scripting, but in Flow, it’s a drag-and-drop operation. All right, so what I wanted to take you through very quickly is a simple discussion around the actual architecture of Flow and how it fits together. So the first thing that we do is install Flow as a system. We call it a system, and what that does is create a repository. It uses SQL in its backend, and SQL stores the configuration as well as the data and information that Flow records and presents. Once you’ve got your system installed, which takes a couple of seconds, the first thing is to start configuring your collection mechanisms. Okay, so this is step one of Flow. Now, it’s important to understand that Flow can connect to multiple different sources in one system.

13:07
Graham
So you might have a number of SQL databases. Maybe you’ve got laboratory information management systems, maybe you’ve got weighbridge systems, maybe you’ve got an MES database that you want to connect in. Maybe you’ve got an ERP-type database with your plan, target, or metadata information that you want to pull into Flow. You typically would have a historian; in terms of the Element Eight stack, we have the Ignition historian and the Canary Historian, and these historians seamlessly integrate into Flow. Again, it’s a drag-and-drop environment to make these connections and start building up your measures and KPIs. You can also use Flow to connect to web services in the cloud, or even to IoT platforms. And then the last one in that list is manual entry. And I think this is an incredibly important component of Flow.

14:14
Graham
When Flow was first conceived and designed, we built it with the ability to enter data manually. This takes the form of a pseudo Excel spreadsheet concept, all within a web-based environment. And this manual entry of data is so important for any kind of reporting and dashboarding system. Without it, you’re often going to find yourself stuck, or users unwilling to use the system, because they just can’t interact with the data, they can’t retroactively change the data, et cetera. Okay, so that bottom layer is the collection of our data. The second step happens as we collect it in. As the data goes into the Flow information system, we’re actually aggregating it. We are pre-aggregating, we are calculating it, and we are storing it in an aggregated form, consolidated and contextualized.

15:28
Graham
The third step is the very powerful Flow calculation engine; sometimes we refer to it as our KPI calculation engine. What it does is take data out of Flow, perform various calculations, and push the results back into Flow. And this is incredibly powerful, because imagine this: you could take components of a calculation from SQL databases and mix them in with components from a historian, or a cloud or IoT source, or even manual entry. So you can bridge your calculations across the multiple data sources that are feeding the Flow system. This enables things like bringing in your plan from an ERP system, bringing in your actual from a historian, and performing a reliability-type calculation against those two. Very powerful concepts. Okay, then we have the visualization component. This is out of the box, HTML5. You’ve seen some screenshots of this.
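As a rough illustration of the cross-source KPI Graham describes (plan from an ERP, actual from a historian): in Flow this is drag-and-drop configuration, but numerically it amounts to something like the following. The function name and the tonnage figures are hypothetical, not from the demo:

```python
def attainment(plan: float, actual: float) -> float:
    """Plan attainment as a percentage. The plan would come from one
    source (e.g. an ERP system) and the actual from another (e.g. a
    historian); the calculation engine bridges the two."""
    if plan == 0:
        return 0.0
    return round(100.0 * actual / plan, 1)

plan_tonnes = 500.0    # hypothetical value pulled from the ERP
actual_tonnes = 465.0  # hypothetical value aggregated from the historian
print(attainment(plan_tonnes, actual_tonnes))  # 93.0
```

The value of doing this inside one platform is that both inputs are already aligned to the same time buckets before the ratio is taken.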

16:36
Graham
Lenny’s going to show you some of this in the demo as well, so I won’t say much more about that. We have what we call our notification system, and this is a great way to trigger the sending of information into various other tools: into email, into Slack, into SMS, or into the Flow mobile app, which comes in iOS and Android flavors. So you can automate the sending of data based on a timeframe or an event occurring, and the information will then be sent into those platforms. And if you’re using the Flow mobile app, one of its great features is that the people in your organization who have access to those message channels can actually collaborate with each other within those channels, right there on the app.

17:32
Graham
And what that does is it brings the people or the human concept into your information and decision making.

17:45
Lenny
Okay.

17:45
Graham
And then, number six, our last layer of the Flow platform is the ability to integrate data out of Flow. Flow is by no means a closed system. We’ve gone to great lengths to make it open, to allow you to take the information that you are gathering, calculating, and manually inserting, and push that data out into other systems: other SQL databases, BI tools, MQTT, real-time systems (to go back to your SCADA, for example), or back into a historian. And the last one there is the Flow tier-two systems. So, just a note on the tiering of Flow: we have customers that have a tier one at their production sites.

18:39
Graham
So at each of their production sites they have a tier one, and then they will have a tier two at their head office level, which pulls and aggregates information from every one of those sites. And this is fantastic for benchmarking, for logistics, for planning, et cetera, across a whole fleet of production sites. Okay. And I think with that, I’m going to hand back to Lenny. So, Lenny, I’m going to stop my screen share, and hopefully you’re back online and we’ll get you up and running with your demo.

19:22
Lenny
Perfect. Thanks, Graham. So, yes, I’m back online. Sorry about that, guys; I had a bit of a glitch with my Internet connection. I probably shouldn’t have made that Tesla joke about live demos, because it seems like it’s bitten me a little bit here with my Internet connection. But anyway, let’s hope it all goes smoothly from here on. All right, so, yes, I’m going to do a little bit of a demo of how we can take data and turn it into information using the Flow platform. But I just want to take one step back before I start building a demo inside of Flow, and show you guys what we’re aiming to do in this webinar.

20:03
Lenny
So this kind of dashboard that you’re seeing here in front of me is for a filling line, and it’s a little bit OEE-based, so I can see what the performance of this line is, and I can see what my total OEE for the line is. And I’m going to try and build this dashboard with you guys on this webinar, so you can see how we can actually get something like this up and running with data that’s already stored inside my historian. Now, just to step back a little before we get stuck into building these things, I want to talk a little about where my data is coming from. I’ve got a little simulated PLC environment, and what I’ve done is connect an Ignition Edge device to that PLC.

20:47
Lenny
Now, that Ignition Edge device also has an MQTT transmitter installed, so that will act as my translation layer. It will take the data that’s been captured from the PLC, translate it into an MQTT packet, and push it off to my MQTT broker that I’ve got hosted in the cloud. Now, on top of this, I’m also putting the data into my Canary Historian. The great thing about the Canary Historian is that it natively has an MQTT collector, so it understands the MQTT protocol; it can get the data from that MQTT server hosted in the cloud and store it as time-series historical data in its database. All right, and then on top of that, I’m going to add Flow, which is our information platform.
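The shape of the MQTT payload isn’t shown in the demo. Note that Ignition’s MQTT Transmitter actually speaks Sparkplug B, a binary protobuf encoding, so the JSON below is purely an illustrative sketch of the kind of data moving from the edge to the broker; the field names and topic are hypothetical:

```python
import json

def make_payload(good: int, bad: int, state: int, sku: str) -> str:
    # Illustrative JSON message body carrying the filler's key tags.
    # (The real transmitter would encode these as Sparkplug B metrics.)
    return json.dumps({
        "good_count": good,   # totalizer of good bottles
        "bad_count": bad,     # totalizer of rejects
        "state": state,       # machine state engine value
        "sku": sku,           # product currently running
    })

payload = make_payload(1200, 14, 20, "COLA-500ML")
# A client such as paho-mqtt would then publish it, e.g.:
# client.publish("element8/filler1/production", payload)
```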

21:41
Lenny
And on top of that, I will then build out our little dashboard here. And the scenario we’re going to build out is this OEE demo. So that’s a little bit of an idea of the architecture and what we’re going to do, just so you guys get an idea of the actual flow of data all the way up the hierarchy, until we can turn that data into information using the Flow Information Platform. All right, so to show that, I’m going to quickly go to my Ignition project here. This is my little HMI SCADA project that I’m running on my Ignition Edge device. And if I browse my OPC tags here, I’ve got a little simulator running.

22:28
Lenny
And if I open that up, you’ll notice that I’ve got some production data: the bad bottle count, the good count, the actual state that the filler machine is in, as well as the SKU, or the current product the machine is manufacturing. What I’m going to do is just add that inside my MQTT tag structure, and by doing that, it will then be able to send it out to my Canary Historian. So there we go: I’ve got the tags that I’m interested in inside my MQTT packet. Now, on my Canary Historian side, what I’ve done is create a little historian data block. I’ve called it Element Eight.

23:11
Lenny
And you guys will notice that I’ve got some tags writing into that. If I expand it, I can see that for today I’m historizing quite a lot of data, and right at the bottom here, I’ve got my bad and my good counts that are now starting to be historized in my Canary Historian. All right, so we’ve got the data from our edge device into our historian, and I’m now going to go and add that into Flow. So I’ve got my little VM here where I’ve got Flow installed, and I’m going to go and launch the Flow configuration tool. This is the tool that we’ll use to engineer our solution and actually connect to the different data sources that we’ve got.

23:58
Lenny
Now, I’ve got a pre-populated little project here for the webinar today, and I’m going to connect to that. When it opens up, you’ll notice that I’ve got quite a nice model structure already defined in my Flow model. I’ve got some utilities management that I can do, so I can monitor my gas usage and my steam usage, but what I’m interested in today is obviously the production side. So I’ve got a filling line here; I’ve got filler line number two already built out, with all the different components of OEE that I’m actually going to measure, and I’m using the data inside my historian to get the good production and bad production. So my challenge for today is to build out the same OEE structure for filling line number one.

24:44
Lenny
Now, obviously, I need to connect to my historian to do that. So inside my Flow configuration editor, I’m going to go to the integration section. You’ll notice that I’ve already connected to a few historians here, but just to give you an idea, as Graham mentioned, we can connect to numerous historians. The Canary Historian connector is the one that I used for the demo, and that’s the one that I’ve got connected here. You’ll notice that if I click on that Canary Historian, it will browse the namespace of exactly those same tags that I’ve got in the Canary Historian. And if I open up one of these tags, so this is one of my flow transmitters, you can see what the data is that I’m actually going to play with.

25:29
Lenny
Now, very important: we never, ever replicate this data. We use the raw data to create the calculations and aggregations that we require, but we’re not a historian, and we do not replicate the data. That’s the job of the Canary Historian. All right, so let’s see how I can actually build this out. Now, hopefully someone in my organization was kind enough to share a template of this OEE solution with me. So what I’m going to do is go and connect here to my Flow template server. Graham mentioned that we allow for tiering inside a Flow system, and one of the things we can use tiering for is a server that hosts a lot of templates for me. So I’m connected here to a cloud instance of Flow.

26:22
Lenny
So this is actually sitting in an AWS instance, and I’m connecting to it. If I hit the test button, it tells me that I have successfully connected to my template server. That’s great. So what I’m going to do here is refresh, and that will pull all the templates that are available on the server for me. And there we go: there is a filling line, or OEE solution, template that I can utilize, with all the relevant KPIs and measures already built out for me. So I’m going to actually use this in my demo here. I’m going to drag it and create a local instance of this template; I’m going to instantiate it into my local system.

27:06
Lenny
Now, at this point, I can decide if I want to keep receiving updates from this Flow template server, or, if I really wanted to, I could delink it from my template server, and then I would never get updates as changes are potentially made on that template server. But I’m going to leave it like this for now. What I am going to do is release it; for me to be able to use it in production, I need to release that template. Then, on my model side, all that I now have to do is go and extend my model. So what I’m going to do now is just create a little folder structure for filler number one, and I’m just going to move it up so it’s above filler number two in my hierarchy.

27:46
Lenny
And then all that I have to do is go and instantiate this OEE template of mine. So I’m going to drag it across into my model, and this will now instantiate my template. And there I’ve got all the KPIs and metrics that I require for my OEE solution. Now, you’ll notice that some of these measures have little red dots next to them. That means there’s an error in the configuration that I need to pay attention to. You’ll notice that the time in state, as well as the bad production and the good production, need to be linked to the historian tags that I’ve been historizing. So I’m going to go back to my integration section here, where I also have a production historian that’s storing some information for me.

28:38
Lenny
And if I expand that, I’ve got my filler information here for filler number one. You’ll notice that I’ve got a bottle count for the good bottles, so obviously I need to link that to my good production, and I also need to link the bad, or the rejects, to my bad production. I also need to link the time in state, so I’m going to link the state tag of the machine. This is a typical state engine of what that filling machine is in: is it busy in a production run, is it busy with CIP, is it maybe starved, or has it got a bottle jam? So that’s a typical state engine of the machine, and I’m going to link that to my time in state calculation as well.

29:19
Lenny
All right, so if I open this up, it knows that it needs to retrieve the data from my historian. By default, it will be set to an average aggregation, but obviously I would like to know the time that this piece of equipment is in a running state, because what I’m aiming to do is work out the utilization of my piece of equipment. Now, I know this is a very simple example of how to determine utilization, but I can simply say that if the state engine of this filling line is in state number 20, it is in a running state. So I’m going to look at that. The time in state also gives me data in milliseconds. Now, Flow has the capability to clean and aggregate your data for you.

30:08
Lenny
And obviously I can apply scaling factors and filter tags to do that for me. So in this case, what I would like to do is scale milliseconds to the total hours that this piece of equipment is running, because this is an hourly measure that I’m configuring; I want to know the hourly utilization of my piece of equipment. Now, to go from milliseconds to hours, the factor is about 2.78 times 10 to the minus 7. That is simply 1 divided by 1000 milliseconds, divided by 60 seconds, divided by 60 minutes, which takes you from milliseconds to hours. So I’m happy with this calculation, and I’m happy with the configuration that I’ve made, so I’m going to close this editor for the time being. I also have my good and my bad production.
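To make the scaling concrete: the factor being configured is 1/3,600,000 ≈ 2.78e-7, which converts a time-in-state total from milliseconds to hours. A minimal sketch of the idea (not Flow’s actual retrieval code; the sample data is made up) might look like:

```python
MS_PER_HOUR = 1000 * 60 * 60  # 3,600,000, so the scale factor is ~2.78e-7

def hours_in_state(samples, target_state):
    """samples: list of (timestamp_ms, state) pairs sorted by time.
    Sums the milliseconds spent in target_state (each state holds until
    the next sample) and scales the total to hours."""
    total_ms = 0
    for (t0, s0), (t1, _s1) in zip(samples, samples[1:]):
        if s0 == target_state:
            total_ms += t1 - t0
    return total_ms / MS_PER_HOUR

# One hour of samples: running (state 20), then CIP (30), then running again.
samples = [(0, 20), (1_800_000, 30), (2_700_000, 20), (3_600_000, 20)]
print(hours_in_state(samples, 20))  # 0.75, i.e. 75% utilization for the hour
```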

30:59
Lenny
Now, if we look at the good and the bad production tags from a raw perspective (let’s just open up this bottle count here), you’ll notice that this is a totalizer value. So an average aggregation will not be appropriate for this type of tag either. What we actually need to do is change these to a counter retrieval. Flow has a counter retrieval that we can utilize especially for totalizer values, and it also handles rollovers automatically: when that totalizer does roll over within the hour, we will automatically detect it and work out the correct total for the hour. So obviously I need to do that for my bad product, and I also need to do it for my good product.
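A rollover-aware counter retrieval, in spirit, does something like the following. In Flow this is configuration rather than user code, and the rollover limit here is a hypothetical value chosen for illustration:

```python
def counter_delta(prev: int, curr: int, rollover: int = 1_000_000) -> int:
    """Production in a period from two totalizer readings. If the counter
    wrapped past its rollover limit, add the wrapped-around span so the
    total stays correct instead of going negative."""
    if curr >= prev:
        return curr - prev
    return (rollover - prev) + curr

print(counter_delta(4_500, 5_700))   # 1200 bottles, no rollover
print(counter_delta(999_900, 300))   # 400 bottles across a rollover
```

Without the rollover branch, the second reading would yield a large negative delta, which is exactly the artifact an average aggregation (or a naive subtraction) produces on totalizer tags.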

31:46
Lenny
So I’m just going to change that to a counter retrieval as well. All right, while I’m here, I can also very quickly talk about this option here, before I go and deploy these measures to actually work out what these totals are: the backfill option. Obviously, I can go and backfill, retrieving data that’s already in my historian, which may have been historizing data for months or even years. So I can get baseline utilization numbers, or KPIs and measures from historical data, not just from when I actually install and commission the solution, or the Flow Information Platform. All right, so I’m happy with that.

32:29
Lenny
So literally what I had to do was configure the time in state, to know if my machine is running or not, and utilize the bad and the good production to see what my quality component of OEE is, as well as my total performance, or total production. And I’m happy with that, so I’m going to go and deploy it. I’m going to deploy this folder, which will deploy all of the measures inside of it, and the Flow engine in the backend will now backfill: it will get the data and run the calculations. And if I’m lucky, if I open up this utilization here... let’s give it a few seconds; it’s busy doing some processing. There we go. So, just to quickly show you guys what happened here: I’ve got a utilization of about 94%.
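For reference, once the three OEE components exist as measures, the OEE figure itself is just their product. A sketch with hypothetical numbers (only the roughly 94% utilization comes from the demo; the performance and quality figures are made up):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    # OEE is the product of the three components, each expressed as 0..1.
    return availability * performance * quality

a = 0.94          # utilization from the time-in-state measure (from the demo)
p = 0.80          # hypothetical: actual rate vs. design rate
q = 1186 / 1200   # hypothetical: good bottles / total bottles

print(round(oee(a, p, q) * 100, 1))  # 74.3 (% OEE)
```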

33:19
Lenny
So that’s the percentage of the hour that this piece of equipment was running. And if I look at this in a slightly more tabular fashion, I can see what the value is for the specific hour, what the quality of that value is, whether there was potentially any loss of data in the historian, and whether this is the preferred version, as well as what the current version is. As Graham mentioned a little earlier, we always keep a full audit trail of all the versions that a measure goes through; we version it up and keep those versions inside our database. All right, so I’m quite happy. I’ve got my standard structure here, so let’s quickly build that out into a report. Now, I’ve got a reporting section here, and underneath my production section, I’ve already got a production report for filler number two.

34:11
Lenny
So let’s quickly create a dashboard for filler number one. Let’s just leave it at the default name; that’s fine. And when I open up this dashboard, I’ve got a blank canvas where I can now start adding in some components. So let’s add a very simple time series chart here. I’m going to go and create a time series chart; it’s going to be an hourly chart. And obviously, I can move it around and resize it to set how much space it consumes on my dashboard. What I’m going to do on this chart is very simply plot my bad and my good production on top of one another.

34:49
Lenny
So I’m going to take the bad production count and drag it onto the axis, and I’m also going to drag my good production count onto the axis. I’m going to tell the axis to stack these two together by value, and to stack them as a column chart, because I want to recreate that column chart that I had in my initial dashboard. And I can also say how much data I actually want to load when I initially load the system; I’m going to load two days’ worth of data into my chart here. All right, let’s see if I’ve got some data. I’m going to open up my chart here, and I’m going to refresh my Flow server website.

35:32
Lenny
I’m going to expand that, go down to production, go down to filler. There’s my filler dashboard that I’ve just created. Open that up, and I should see my very simple time series chart with my good and my bad production on top of it. Okay, perfect. So I’ve got some data going on my chart. Let’s go back to the configuration tool. I can also go and add a widget at the side here to tell me what my current performance is. So let’s add a widget, which will be an hourly widget; I’m just going to resize it a little bit here and open it up. Inside here, I’m going to drag my performance KPI onto this widget, and I’m going to set it to be a gauge widget.

36:15
Lenny
And I can also go and change the heading to say that this is the performance of the line, and I can give it a nice different color here. I’ve got a whole bunch of palettes that I can choose the colors from, and I’m going to go and select one. So now I’ve got a widget for the performance of my line as well. So if we go back here and refresh, let’s see if I’ve got my performance widget. Here we go. My performance widget is not looking very positive; the performance of this line is a little bit below par. All right, so I’m starting to build out my little dashboard here as we go along.

36:52
Lenny
Now, one thing that I don’t have at this point in time is what the current product is that I’m running, and what the performance per product is. So I don’t have that additional context added to my data. Now, I can very quickly do that by utilizing our event system. I’m going to add an event on this OEE concept of mine for this filling line. And what that event is going to be able to do for me is tell me when I actually started running production or not, and what the actual SKU is, or the actual line equipment that I’m using to do that. So I’m very quickly going to build up an event here.

37:37
Lenny
This event will be my product run event, and this is going to be for filler number one. I’m going to open it up and give it some triggers for when I actually started and ended my production run. So I’m going to go back to my data source here, into my production historian, and I’m going to use the state tag again. I know that when I’m in state number ten, that’s when I start a production run. And I’m going to end it when I start my CIP cycle, and I just know from the data that I’ve got that CIP starts when I’m in state number 30. Now, I can also add different context to this event by means of what we call an attribute.
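The trigger logic described above can be sketched in a few lines of Python. This is an illustration only: Flow configures these triggers in its GUI, not in code, and the only facts taken from the demo are the state values (10 starts a production run, 30 starts CIP and ends the run); the function name and sample data are invented.

```python
RUN_STATE = 10   # production run begins (per the demo's state tag)
CIP_STATE = 30   # clean-in-place begins, ending the run

def detect_run_events(samples):
    """samples: time-ordered list of (timestamp, state) tuples.
    Returns a list of (start_ts, end_ts) production-run events."""
    events = []
    start = None
    for ts, state in samples:
        if state == RUN_STATE and start is None:
            start = ts                     # run opens on first state-10 sample
        elif state == CIP_STATE and start is not None:
            events.append((start, ts))     # run closes when CIP starts
            start = None
    return events

samples = [(0, 5), (10, 10), (50, 10), (90, 30), (120, 10), (160, 30)]
print(detect_run_events(samples))  # [(10, 90), (120, 160)]
```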

38:27
Lenny
So I would love to know what the different products are that I’m running during these different events. I’m going to add an attribute here. It’s going to be a retrieve attribute, and this is going to be my product attribute, and I’m going to link that to the SKU, or the product that’s currently running on the line. So it will go and look in the historian for what that product is, and obviously it will then save it as my product. Now, at the bottom here, I can link this to an enumeration. So if your product code is an integer value and you actually want to enumerate that to what the actual value is, we can do that by assigning an enumeration lookup to it. In this case, I’m going to link that to my product enumeration.
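Conceptually, the enumeration lookup is just a mapping from a raw integer product code in the historian to a readable product name. Flow sets this up through its configuration tool; the codes and names below are invented purely for illustration.

```python
# Hypothetical product enumeration: integer code -> display name
PRODUCT_ENUM = {
    1: "Cola 330ml",
    2: "Lemonade 500ml",
    3: "Soda Water 330ml",
}

def enumerate_product(code):
    # Fall back to showing the raw code if no enumeration entry exists
    return PRODUCT_ENUM.get(code, f"Unknown ({code})")

print(enumerate_product(2))  # Lemonade 500ml
print(enumerate_product(9))  # Unknown (9)
```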

39:12
Lenny
All right, I can also go and create a little fictitious work order here. This is one of the new pieces of functionality that we’ve brought into Flow, and that is the concept of calculated attributes. So what I’m going to do is quickly create a retrieve attribute here. This will be the line that I’m currently running on, and I’m going to give it a constant segment value: I’m running on filling line number one. I also would like to know the current period start of the segment. So let’s add a new retrieve attribute; this will be my start, and it will retrieve the actual period start as my start time.

40:03
Lenny
And I can also maybe say how many of these production events I’ve actually retrieved. So this will be the index of my event, and it’s just going to retrieve the current period’s index. Now, what I can do is take all three of these attributes and add them together by means of our new calculated attribute function. So I’m going to do that, and this will be my work order here. I’m going to go and add that. It will populate a demo script for us, which I’m just going to remove. And all that I would like to do is add these things together.

40:43
Lenny
So I would like to add the index, the line that I’m running on, and the actual product together to give me an individual work order. All I have to do here is add them together. I’m going to add the index, and I’m just going to look at the first value for that, then add the second one, which is the line, and the last one, which is the product. Now, I know I’m going very fast, and there are a lot of options here that you can actually choose from.
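In plain terms, the calculated attribute concatenates the event index, the line, and the product into one work-order string. The sketch below shows that idea only; the attribute names, separator, and zero-padding are assumptions, and Flow’s own calculated-attribute scripting syntax will differ.

```python
def work_order(index, line, product):
    # Combine line, product, and a zero-padded event index into one
    # unique-looking work-order identifier (illustrative format only)
    return f"{line}-{product}-{index:04d}"

# e.g. event index 42 on hypothetical line "Filler1" running product "COLA330"
print(work_order(42, "Filler1", "COLA330"))  # Filler1-COLA330-0042
```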

41:24
Lenny
I would strongly recommend that you guys look at our documentation on all of the different functionality that we have here. So we can do that: hit the play button and confirm my calculation, and this will now return a combined little attribute for me. I’m going to deploy this event, and that will go and create an event for me on all my product runs. And if I’m lucky, I will see what the actual product was that was running, which line it was running on, and then this combined work order with all of those attributes added together to give me a unique number that I can now utilize in my reporting. So that’s great. Let’s go back to my report and add this as an event report here at the top.

42:08
Lenny
So I’m going to add this event-based timeline, and I’m going to add this run event to the top, underneath the section: drag this run event in, and I’m going to give it a color as well for the different products that I have. And I’m going to say the index is also the product that I’m actually running. Cool. And I’m also going to say, just show me the data for the last two days. All right, let’s refresh our chart and see: are we starting to build out our information? There we go. So I’ve got a timeline of all the different states and all the little attributes. I can clean that up, it’s a little bit too much information, but you guys can see that I’m starting to build my little dashboard here. Perfect.

42:57
Lenny
All right, so let’s get a move on. There’s quite a lot to do, and I’ve got only a little bit of time to actually do it. So I’ve got my timeline here. Let’s create another widget on this side to show me what my utilization is. I’m just going to add another widget here, which will be an hourly widget. And what I’m going to do here is exactly the same: I’m going to add my utilization percentage into this chart as well. Again, I’m going to change it to be a gauge chart, and let’s see what the actual utilization of my filler is. What I also would like to see is how long it was running for the past hour. So I’m going to add a widget here. It’s going to be an hourly widget.

43:42
Lenny
And inside of this, I’m going to go and drag my time-in-state tag in here, and I’m going to make it a text widget. Something new that we’ve introduced is the concept of a time base. So I can say it’s based in hours, and I would like to show the hours and minutes that it’s been running. All right, so I’ve got that up and running there. And potentially, let’s just add one more widget at the top here to show my actual total production. So let’s drag that in: my actual total production, also going to be a text widget. And let’s just center align it, and this will be my total production. All right, let’s refresh and see how far I got. Perfect. So I’m starting to see my total production.
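The time-base idea above amounts to taking a duration stored in one unit (hours, in this demo) and rendering it as hours-and-minutes text for the widget. The sketch below illustrates that formatting; it is not Flow’s actual implementation, and the output format is an assumption.

```python
def format_hours(hours):
    """Render a fractional-hours duration as 'Hh MMm' text."""
    whole = int(hours)
    minutes = round((hours - whole) * 60)
    if minutes == 60:          # carry over if rounding pushes minutes to 60
        whole, minutes = whole + 1, 0
    return f"{whole}h {minutes:02d}m"

print(format_hours(0.75))   # 0h 45m
print(format_hours(1.999))  # 2h 00m
```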

44:30
Lenny
I’m seeing my utilization. I’m seeing my performance, and I’m seeing my actual utilization for the past hour: how long have I actually been running this? Now, I’m not going to have time to build out this complete dashboard, but I hope you guys see how quick and easy it is to actually build it out. So let me focus on the one that I’ve already done, which is my production dashboard here for filler number two. Just to show you guys what the end result should have been: you can see that I’ve almost got there. I’ve got my time in state, my performance and my production. At the bottom here, I’ve also included a scatter chart with my production to see if there’s any product or any speed setting on my line that might cause more bad production.

45:18
Lenny
And in the middle here, I’ve got my regression analysis on my actual production, so I can see what I’m potentially going to end up with at the end of the year. Now, we’ve added a few more charts and charting functionality in this version of Flow, so let me quickly go into that just to give you guys an idea. As an operator, I will obviously be able to classify my downtimes. I also have an operational dashboard where I can now look at all my different run events, as well as a new form type that we’ve introduced where I can actually change my limits right here in the front end, so I don’t have to do that in the actual configuration tool.

45:58
Lenny
We also included the capability on an event form to link it to measures and to show those on the form. So if the quality department would like to know, for these bad products, what the individual quality components were due to breakages, crown issues or rework, I can go and show those columns and start populating the values here inside my form, so that I can actually go and create those quality measures as well. I can also go and change my limit configuration right here in the form. So let’s change this to 36,000 here, hit the button, and I can confirm it. Another feature that we’ve introduced is that I can actually use these targets and high limits inside of my calculations. In the past we had to do that with manual measures to potentially have a rating.

46:48
Lenny
But you’ll notice now that if I refresh my data, my performance calculation will recalculate. And there you go: it went from 107 to 101 because I’ve changed the target value right here on my limit form. Another concept that we’ve added in here: you’ll notice that all of these breakages have now been populated and split into the different hours. So I can now actually go and create a comment here to say why there were so many issues. Let’s just say there was bad alignment in the filler. Right. Now, if I look at my operational classifier or my quality dashboard on this filler, we also introduced the concept of a comments chart.
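A back-of-envelope sketch of why the performance KPI recalculated when the limit changed: if performance is actual production expressed as a percentage of the configured target, raising the target lowers the percentage. The exact formula Flow uses is not shown in the demo, and the production figure and old target below are invented so that the numbers roughly reproduce the 107-to-101 change mentioned.

```python
def performance(actual, target):
    # Hypothetical KPI: actual production as a whole-number
    # percentage of the configured target limit
    return round(actual / target * 100)

actual = 36_400                      # assumed production count
print(performance(actual, 34_000))   # 107  (assumed old target)
print(performance(actual, 36_000))   # 101  (target changed on the limit form)
```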

47:35
Lenny
So now you can see for those breakages, crown issues, rework and quality that I’ve got a new comment, and that’s just the comment that I’ve added in here as well. I can see the split between my bad bottles and my good bottles, and I can see what the actual quality for this line is with these breakages, rework and crown issues that we’ve got here. We also introduced a little bit of drill-down functionality. You’ll notice that this bad production at the top here has a link that I can actually click. If I click on that, it will open up a separate dashboard, and I’m back at that quality dashboard where I can now go and classify and change my limits, et cetera.

48:13
Lenny
So that’s quite a nice feature that we’ve included in this release as well: the ability to actually drill down into a different Flow component from a comparison chart perspective. All right, so hopefully you guys got a good idea of how quick and easy it is to actually configure, connect, get something up and running, start building out a solution and actually get to an end state like this. We’re running a little bit out of time and still have to do a little bit of Q and A as well. So I think with this, if there are any questions around what I’ve done, you guys can just shout out to me after this webinar, and we can maybe have a detailed one-on-one session with you guys as well.

49:00
Lenny
So I’m going to go back to the presentation here and maybe just go through a little bit of the commercials of Flow as well. All right, so Flow is licensed by the system or the instance that you are running. The price you see there is the price for a perpetual license, so that is to own it on your own server. The price you see behind that is our subscription model, so you can also have it as an annual subscription. From a license perspective, it gives you unlimited users, unlimited reports and dashboards, and unlimited clients to actually access it. It starts out with 100 measures and ten events, and then you can buy more one at a time. So if you really need one or two new measures, or maybe 100 or so, there are no more packs.

49:51
Lenny
So you buy them a measure at a time: $5 for a measure and $50 for an event. And in the gray there, you can also see the subscription price, which is $2 per measure per annum on the subscription model, as well as $20 per annum for an event. All right, just quickly, before we end with the Q and A: in terms of what’s next, you can obviously go to our website and download a fully functional 30-day trial version of Flow. If you are currently on support, you can go and upgrade your current instance; the new version is available. Also, please go and look at our support website, support.flowsoftware.com. The release notes with all the different features that we’ve included in this version are there.
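For budgeting purposes, the add-on pricing described above can be tallied with simple arithmetic. The figures are as quoted in this 2020 webinar (measures $5 each perpetual or $2 per year on subscription; events $50 each perpetual or $20 per year on subscription) and should be verified with Flow Software before use.

```python
MEASURE_PERPETUAL, EVENT_PERPETUAL = 5, 50   # USD, one-off, as quoted
MEASURE_ANNUAL, EVENT_ANNUAL = 2, 20         # USD per year, as quoted

def addon_cost(measures, events, subscription=False):
    """Cost of extra measures/events beyond the base 100 measures and 10 events."""
    if subscription:
        return measures * MEASURE_ANNUAL + events * EVENT_ANNUAL
    return measures * MEASURE_PERPETUAL + events * EVENT_PERPETUAL

print(addon_cost(10, 2))                     # 150  (perpetual)
print(addon_cost(10, 2, subscription=True))  # 60   (per annum)
```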

50:46
Lenny
There’s also a whole knowledge base of support articles that you can access on our support website. Now, before we end off, I almost forgot: we actually have a poll that we’re also running. So, Clarice, if you can maybe launch the poll. It’s just a little poll where we want to see what your thoughts are: from what I’ve shown you here today about getting information from data, what are the biggest three challenges that you guys foresee? You can select multiple answers, so we can just get an idea of what you guys think the stumbling blocks are, why people are struggling to get data or get started with the information management journey. All right.

51:40
Lenny
And I think when we’re done, Clarice will, after the Q and A, just give us the results of that poll as well. All right, Q and A. So do we have some questions, Clarice?

51:58
Clarice
Hi, Lenny. Hi, everybody. Yes, we have one question from Brian Lee. He’s asking: both Canary Labs and Ignition have the ability to create dashboards, so in what scenario would you use Flow compared to the other two?

52:15
Lenny
Okay, good question. So, yes, both Ignition and Canary can create dashboards as well; that’s more for real-time dashboarding and KPI purposes. What Flow brings to the party is the ability to actually create aggregations of your KPIs, and it also gives you the capability to add context to them. So that work order run that I created, that’s a whole bunch of new context that I was able to create and add on top of my data. What we’re also seeing is, as we move up in the information management hierarchy, the amount of real-time data that we require for making decisions gets less and less.

52:56
Lenny
So for people sitting in managerial kinds of roles that need data, they don’t really need to know the real-time, every-millisecond fluctuation of a tank level, as an example. For them to make a proper decision, they need to know the actual deviation of that tank, potentially every hour, so that they can know if they need to create stock. Flow does that very well. It gives us that capability to create these time buckets and time slices, to give us information based on the hour, the shift, the day. And then, very importantly, it adds that additional context to the data by slicing the information by the event context that we can add.
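The time-bucketing Lenny describes can be sketched as follows. This is illustrative only, not Flow’s implementation: it turns raw high-frequency samples into hourly values, and the bucket size and the mean aggregate are arbitrary choices for the example.

```python
from collections import defaultdict

def hourly_average(samples, bucket_seconds=3600):
    """samples: list of (epoch_seconds, value) pairs.
    Returns {bucket_start: mean value} per time bucket."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # Snap each timestamp down to the start of its bucket
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}

samples = [(0, 10.0), (1800, 14.0), (3600, 20.0), (5400, 22.0)]
print(hourly_average(samples))  # {0: 12.0, 3600: 21.0}
```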

53:38
Clarice
Thank you. And then we have another question.

53:42
Graham
Yes, it’s Graham here. Sorry, I just wanted to add to Lenny’s point there. I think it’s important to understand that what Flow does is actually transform your raw data. So there’s a transformation process there, into information: that aggregation process, that calculation process, the secondary and tertiary aggregations on previously aggregated or evented data. These are concepts that you would find in the IT world with data transformation, and what we’ve done with Flow is bring them into the industrial data world with that information management. Thanks, Clarice.

54:27
Clarice
Thanks, Graham. Then we have a question from Nada Lopezer. Can you use other measures in the calculated attributes, or only attributes from that specific event?

54:40
Lenny
So you can use attributes from the specific event. What you can also do is use attributes from other events. So I can add attributes from another event into my attribute calculation. But obviously, all of that information is still available: all of the attribute values are still available in the measures calculation. What I’ll do after this is share with you a little bit of how we can actually get all my attribute values in a measure, rather than getting the measure values in an event attribute value.

55:19
Clarice
Then we have one last question from Barthefrise. Can we have structured comments, kind of a drop-down?

55:27
Lenny
Yes. Okay, so that’s one thing that I did not show, which I can potentially very quickly do.