By Elian Zimmermann
15 July 2021

#2 Changemaker Community Live!


Watch The Quarterly Update Recording to Hear The Latest On Ignition, Canary & Flow.

Food & Bev Industry feature:
Data to Value: A tiered approach with Rowan Ray, AB InBev

Data and its value need to be realised through visualisation. You can have plenty of data, but if no one can see it, or if it is not portrayed in an effective manner, it becomes worthless. Watch our conversation on Data to Value: A tiered approach with Rowan Ray from AB InBev.

We’ll also talk through the latest release of Flow 5.4.0, some of the new features and enhancements in Ignition 8.1.5, and Canary version 21.3.

So, what’s new in Flow 5.4.0?

Watch the video to see it in action.

NEW FEATURES:

The new release includes some exciting new features like model security, InfluxDB as a data source and event trigger enhancements, to name a few.

USABILITY:

We’ll show you added engineering usability features like copy/paste, import/export, and using templates for events and charts.

FIXES AND UPDATES:

Apart from a number of performance enhancements, we’ve added support for .NET Core 3.1 and .NET Framework 4.8, as well as OS support for Windows Server 2019.

New How-To Guides!

SPEAKERS:

Leonard Smit
Customer Success Manager
Element 8
Jaco Markwat
Managing Director
Element 8
Rowan Ray
Tech Supply Specialist
AB InBev

Transcript

00:00
Jaco
So good morning to everybody, and especially if it’s your first time, thank you for joining us. We promise to make it valuable. So, the team on this side this morning: maybe we can kick off with the introductions. Let’s quickly get there. Fantastic. On our side this morning, my name is Jaco; I should have probably done that first. Jaco Markwat. I look after the team here at Element 8. We are very excited to have Rowan Ray with us. Rowan is a tech supply specialist with AB InBev. Rowan, thank you very much for joining us.

00:35
Rowan
Thank you very much. I look more handsome than that picture.

00:41
Jaco
We look forward to chatting with you a little bit more about data, the value of data, and some of the approaches that you have at AB InBev. So thank you for joining us. Then, of course, Lenny: you know Lenny Smit, who looks after our customer success team. We also have Tabelo Merced on our customer success team. And sitting next to me is Laura-Lee Stradum, also part of our customer success team. So this is who’s with you today, and we’re going to share the load in terms of who talks through what with you this morning.

01:16
Jaco
The agenda for today: we’ll kick off, as always, with our changemakers, the people that are really the heroes in our community and our story, and give you an update in terms of certifications, new partners, and new members of our community. Then we’re going to talk through Data to Value, which is our conversation with Rowan. We also thought we’d do it a little bit differently this time in terms of how we present what’s new and the updates, because it can be quite overwhelming when you see different updates and links and shares. So we thought we’d do it per Flow, Ignition and Canary Historian category today, as opposed to throwing it all together under one enablement slide. Hopefully that’ll make it a little bit easier for you to get updates depending on the solution that you’re interested in.

02:03
Jaco
And then we’ll close it off with what you can look forward to over the next month or next 60 days or so, and anything else that’s most important to you. So really, the message is: if there’s one hour that you can spend with us, or spend on watching something that we’ve produced or shared over the last little while, this should be that one hour. All right, some of the new partners that we have on board. I think we shared last time, in the previous CCL, that Afrilek obtained their gold certification. Very exciting that they’ve also, in the meantime, achieved their IIoT certification, which is done through Cirrus Link and is a fairly new certification. It’s quite a nice and exciting one, a very useful and informative one to have. And they have also very recently certified on the Sepasoft Recipe/Changeover module.

02:59
Jaco
So that makes Afrilek our second Sepasoft MES certified system integrator. That’s a really good achievement; I know the team from Afrilek has put a lot of work and time into that one. CSS, up here in Gauteng and down at the coast, Peter Finte and the team: they are obviously Ignition certified and Flow certified, and they’ve added the Canary certification to the team’s portfolio. I think we had three people certify

03:27
Lenny
On Canary with the Durban training, with.

03:30
Jaco
the Durban training from CSS. So that’s a really good achievement for Peter and the team, completing their trifecta of certifications. And then we have new Ignition registered integrators in RBE Automation. Welcome to the team from RBE Automation, based in Samin; exciting to have somebody on board from that side of the world. We also have Diltron, Peter and his team, based out in the West Rand. Really exciting to get new registered SIs on board for Ignition, so welcome to you as well. Then, in terms of new customer successes, a couple of the stories we would like to highlight. VCPraxis did incredible work; very exciting to see some MQTT stories and applications coming through. VCPraxis has done some really good work with Zantech.

04:24
Jaco
They do the oxygen emergency care gas solutions at various remote sites; a very nice remote MQTT solution from VCPraxis. If you’re interested in that, there are a couple of photos of some of the work done. A very, very tidy solution there. Integ, also at the same time: an MQTT and edge remote monitoring solution, bringing the data on board, done for Badakrifi municipality in the Western Cape. Lovely to see another MQTT solution over there. Then advances for Kenmare Resources: that was an Ignition-specific application that Brahman and the team are building out for them over there. And they’ve also extended some of the Flow use for CCBA across some of the African sites outside of South Africa. And then an interesting one for us, very exciting, for the team at SGS.

05:21
Jaco
They of course do some of the APC solutions for mineral services, advanced process control solutions, and they are using Ignition as a front end for that solution, for Polyus, which is, I think, one of the biggest Russian-based coal producers in the world. So we’re very keen to do a site visit in Russia. Really good work that the SGS team is doing over there; well done for that. And then we have Vennic, who really do a lot of work in the water space; again, an edge application for Watercare Mining. So we definitely see a lot of the edge MQTT applications coming through, which is, I think, where the trend is, definitely.

06:09
Lenny
And we’ll share the links, obviously, for the IIoT certification a little bit later. But I think that’s probably one of the key takeaways: register and see if you can get your IIoT badge. We definitely see that a lot of these applications on the screen right here are using that, so it’s good to get that badge under your belt as well.

06:29
Jaco
Definitely, yeah. And again, congratulations to everybody on here. It’s nice to see the three certification logos for a couple of the partners, as well as some new partners. We look forward to working with you all. All right. So we asked Rowan a couple of weeks ago, or probably a couple of months ago, and we said: Rowan, we would like to chat with you about the story of your journey with, well, I don’t think we knew what we called it back then, but Data to Value; that was the message that we wanted. And we would like you to share some of your thoughts, some of the steps that you went through, and your approach. But you’ve got 25 minutes.

07:17
Rowan
Thanks a lot, Jaco.

07:17
Jaco
And I think, as we progressed through the chat and the conversation, you’ve been able to encapsulate, or at least summarize, that in a couple of key thoughts.

07:27
Rowan
Yeah, 100%. Thanks, Jaco; thanks, team, for having me. I wanted to just touch on how we extracted value out of the data that we obviously have plenty of across our African landscape, and obviously touch on how we utilize Flow to do that. But this is not a session to pump Flow; this is just a session around good practices and how the decision-making process went, and then I’ll touch on the actual solution that we implemented. The first thing that I wanted to highlight is that data is valuable, without a doubt. But is all data valuable? A lot of the big words coming out of Industry 4.0 are AI, machine learning, and data lake.

08:23
Rowan
The message coming from a lot of your suppliers is: just throw all your data, whatever data it is, no matter what it is, into a data lake, and somehow you’re going to get some value out of it, right? They promise you 20% improvements in your efficiencies, or whatever it happens to be. And I think we must temper that excitement a little bit, because without a doubt there’s value there. But when you throw too much data at something, and you don’t understand your problem, don’t understand your process (which we’ll get to now), you just cloud the issue. You don’t know what you’re looking at.

08:56
Rowan
It’s the Hail Mary in football: you just toss it down the pitch and hope that someone catches it and gives you some value, and it just ends up chewing up resources, time and money. And I think that’s my first thing, and that’s one of the biggest.

09:10
Jaco
I like that, because a couple of years ago the message was: we can’t access all our data. And now, with the advent of cheap sensors and devices and their availability, now that we have that access, it’s: let’s just get all of it, regardless of whether there’s context or an understanding of the value I’m looking at; let’s just get all of it and throw it in one place.

09:31
Lenny
I think we speak about it a lot in our podcast sessions, Jaco: there’s this notion that Industry 4.0 is making it easier. Right. The technology is making it easier for us to make the data available, but we must be very careful not to just move the problem from one layer to the other. 100% spot on. Take all that data that’s currently on your plant layer and just put it in the cloud: very often there’s no context related to it, and someone needs to sift through that data and make all the changes. You’re just moving the problem from your

10:02
Rowan
OT layer to your IT layer.

10:03
Lenny
And that’s a big thing that people need to understand: fix that data layer at the source, with the right context and the knowledge of the process.

10:12
Jaco
This is the starting point. I mean, this is the genesis thought or idea: is all of it what you need? Is it valuable?

10:19
Rowan
Yeah, 100%. Our step one, I suppose what I’d advise, is: know your problem and know your process. Right, the process and knowledge thereof. You need to know the inputs and outputs of your process; you need to know what affects the process. When I put these slides together I was thinking, because I come from a brewery, let’s talk about your brewhouse process, right? You’re going to put some malt in, you’re going to put some adjunct in, or whatever it happens to be, some water. And what are your outputs? Sweet wort that you can ferment to get some beer out of. But you have to know what influences the outputs, which leads me to my next point: what are you trying to solve? What are you trying to improve?

11:06
Rowan
If you don’t know what you’re trying to improve, you don’t know what you’re trying to eradicate, and you don’t know roughly where to look. I mean, you’ve got subject matter experts in brewing. You’ve got subject matter experts everywhere in your business, right? If you don’t, what are your overheads? What are you paying people for, right?

11:22
Lenny
Okay.

11:23
Rowan
They should know their process; that’s what they should be good at. But I find, with the Industry 4.0 revolution, sort of like what you were saying, that people are trying to outsource their problem, right? I’m just going to chuck my data somewhere, then someone else will look at it for me and tell me what my issues are. If you try and do that, you’re going to end up in trouble, because it means that you as a business aren’t owning your problem. You’re just outsourcing, hoping that the bulk data is going to get you there. So that’s what I wanted to emphasize: you need to know your problem and you need to know your process. Okay, and what I’ll get to in our use case is the process: what we wanted to try and improve was our daily routines.

12:01
Rowan
And it doesn’t need to be a physical process; it can be just your daily routines. How do you make them more effective? How do you put the right data in front of the people, so that they, as individuals and as resources to the business, become better utilized? I think that’s the key thing. So the second thing I wanted to talk about is standardization, and how standardization is key. From a technological viewpoint, standardization is critical to your success: standardize on a philosophy and a universally accepted protocol, not on the supplier.

12:33
Rowan
And I’ll be the first to admit that, as the old SAB, if you look at the products that we use, you could actually say that we’re probably tied to a supplier. But we are trying really hard to change that concept around and say: what do we want as an end goal, as a deliverable? And choose the philosophy. So as an example, we standardized around our control modules. For anyone who knows: control modules, equipment modules, batch processes, S88. We wanted to design control modules that were mutually exclusive from the hardware on which they were applied. We have different PLCs at our breweries (Allen-Bradley, Schneider, Siemens), but the standardization of the control module was really around the design and functionality that we wanted.

13:26
Rowan
So that’s what I’m trying to get to. The philosophy is: what functionality do you want out of something? Forget about where you’re going to apply it, and just choose a philosophy.
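(A minimal sketch of that idea in code: a functional contract that stays the same while the PLC vendor underneath changes. The class names and the plc_client calls are illustrative assumptions, not AB InBev’s actual design.)

```python
from abc import ABC, abstractmethod

class ValveControlModule(ABC):
    """S88-style control module: the functional contract is fixed,
    regardless of which vendor's PLC sits underneath."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

    @abstractmethod
    def is_open(self) -> bool: ...

class SiemensValve(ValveControlModule):
    """One possible binding; an Allen-Bradley or Schneider class would
    implement the same contract against its own driver."""

    def __init__(self, plc_client, tag: str):
        self.plc = plc_client  # hypothetical vendor driver object
        self.tag = tag

    def open(self) -> None:
        self.plc.write(self.tag + ".CMD_OPEN", True)

    def close(self) -> None:
        self.plc.write(self.tag + ".CMD_OPEN", False)

    def is_open(self) -> bool:
        return self.plc.read(self.tag + ".STS_OPEN")
```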

13:35
Jaco
And this is what’s going to help you scale.

13:38
Rowan
Exactly, 100%. You want to be able to scale quickly. If your philosophy is right, and you find that your supplier, or the hardware you’re using, or whatever it happens to be (your software, your tools) is not giving you that, remove yourself from them and keep your standardization on your philosophy. I also talked about universally accepted protocols, and this comes back to the things you highlighted in your changemakers: MQTT is a universally accepted protocol.

14:06
Jaco
Likewise with OPC.

14:09
Rowan
Try, as a business, as a company, to get aligned to those, because those are philosophies and protocols, not suppliers. And I think that’s a key one that I wanted to raise. And the last point is where within your vertical data stream you standardize, which obviously has pros and cons. As SAB, we chose to standardize as low down as we could.

14:35
Jaco
Right, as low as possible.

14:36
Rowan
We were lucky: we had very skilled resources, control engineers at the sites, who really bought into the concept and were good enough in their skills to execute. But when you try and standardize low down, that’s a lot of effort, because you’ve got so much down there.

14:57
Lenny
Yes, it is a lot of effort. But if you don’t standardize low down... We talk about it: try and standardize as low down, as close to

15:04
Jaco
the edge as possible.

15:05
Lenny
Because if you don’t standardize there, you’ll need to standardize at the layer just above it. Somewhere, you’re going to need to do the work.

15:13
Rowan
Exactly.

15:13
Lenny
And the problem is the further decoupled you are from the edge. The OT guy has the knowledge of the sensor, the scaling, the range, all of that. The further you move up the information chain into the IT space, now it’s the IT guy.

15:27
Rowan
Yes, correct.

15:28
Lenny
Now someone needs to tell him: what’s the range? Exactly. So the lower down you can standardize, the simpler and easier moving that data up becomes.

15:36
Rowan
You’re 100% right. That was one of our biggest issues when we became AB InBev and we really started focusing on the different zones (North America, Middle Americas, all the rest of the zones): you have different standardization in every single zone. But we are a global company, so you need apples-for-apples numbers for the directors who are looking at them. To what you’re saying: now, how do you standardize at that layer? Where do you standardize? You can’t force every single brewery around the whole world to use the same control module; that’s just a huge task. So then you try and standardize at the level that you’re talking about. But that’s difficult, because now, how do you get data from there to there? Do you use SQL extractions?

16:27
Rowan
Do you use OPC UA, MQTT? Do you use file-share systems? And you’re right: then you just shift the problem up to a level that is probably not where you should be standardizing. But unfortunately, in big companies you’re faced with those tough decisions. And I think we’re trying hard now to standardize. I know that the North America zone is also talking a lot about Flow, as an example. Once again, I’m not punting Flow, but they’re talking about using Flow for a similar thing to what we’re trying to do. And that also makes it a lot nicer, because now you have all your data kept in the same sort of database and the same structure.

17:04
Jaco
This universally understood and familiar way of reading reports, seeing correct KPIs.

17:11
Rowan
Exactly. You all speak the same language.

17:13
Jaco
Exactly.

17:14
Rowan
So I think that’s the key of standardization.

17:16
Jaco
Pretty good point.

17:18
Rowan
I’ve harped on about it. But standardization, it’s simple, right? A lot of people talk about it, but a lot of us don’t really get it right. So I think the key thing is to standardize with a deliverable in mind, and that deliverable is typically consistency around what you’re delivering, whether it’s function or data structure. Philosophy and protocol, not supplier; that I spoke about. And where you standardize has pros and cons, which I also spoke about: lower level is better but requires more effort, partnering with suppliers and all the rest, while higher up is more manageable, but sometimes you’re going to lose your value, because the people you’re dealing with don’t really understand the data. Okay, so that’s step two.

18:00
Rowan
And then, I thought, I’ll start leading to the solution we implemented, which was vertical data migration. Just a little bit of background for those people who weren’t able to watch the Element 8 conference last year: at SAB we’ve got quite a few breweries throughout Africa, about 28 or 29 breweries, somewhere there.

18:22
Jaco
A lot of acquisition sites as well?

18:25
Rowan
Yeah, for sure. And a lot of new builds as well that we’ve done. But the old SAB never really focused

18:31
Jaco
On the rest of Africa.

18:32
Rowan
It was always focused on South Africa. So you go to South African breweries and you’ll find a lot of standardization; a lot of people speak the same language and use the same tools and all the rest. But elsewhere we didn’t, and this is an indictment on us.

18:42
Jaco
Quite a strong tribal knowledge, a tribal understanding of how things work.

18:47
Rowan
Yeah, exactly. And it was so easy: you could take a control engineer from one plant, pop them down at another, and you’re still running. Okay, maybe a little bit of adjustment, like anyone used to coding in Schneider versus something else, but the concepts are the same. Once again, standardization around concept and philosophy. But an indictment on us is that we didn’t concentrate on the rest of Africa. We really neglected them, I hate to say, and we’re trying hard to bring it back. One of the big things that we found was that there isn’t standardization throughout the rest of Africa. So the big thing we wanted to try and improve,

19:20
Rowan
and once again I speak to the process, was to make sure that the daily routines (the AB InBev principles, BPO principles, best practices and stuff that you live by) were being applied correctly and as effectively as possible. And for that, we decided that we needed some form of a dashboard that could be in every single meeting, whether it’s energy and fluids meetings, plant meetings, maintenance, brewing, packaging, whatever it is: you could look at the same type of dashboard. And importantly, you could have your key KPIs, which, as we all know at an upper business level, give you value and drive efficiencies. So I think that’s what I wanted to point out here.

20:05
Jaco
Know your problem.

20:05
Rowan
Lack of standardization: across Africa there was no single point of truth, and data got corrupted very often, because there are Excels floating around everywhere. I’m sure you’ve heard a lot about that.

20:18
Jaco
That’s quite common.

20:19
Rowan
Quite common, right. Then, the need to offer simple, effective tools for brewers in order to get data in the same format quickly. And the data that was necessary is the fundamental plant efficiency data that the work teams speak to, and the efficiency data for upper-level management. This is also a good one: upper-level management needs to be able to see data rapidly, and you’ll see, I’ll lead to that when we get to it. If upper-level management don’t know how the plant is doing right now, how do you make decisions? Where do you put your efforts and your resources? So step three, I suppose, is Flow and standardization, at least at a reporting level. Phase one: a Flow instance was created at each plant, right, with the standardization of reporting there.

21:00
Rowan
And the reporting level was chosen because, unfortunately, we just couldn’t standardize any lower. We couldn’t standardize in the PLCs, just because

21:06
Jaco
there’s a different variety of standards, protocols and hardware systems.

21:11
Rowan
Correct. And typically, within the African realm, their philosophy (or our philosophy, sorry; we are all one now), our philosophy in Africa at the time, was: let’s get an OEM like Krones or KHS, who would do lines, or Waterloo, or Talbot & Talbot, who do BTS plants. We would get them in and they would set up the whole thing. But they apply their standard, right, and they leave, and then you’re left with a system which is completely different to this one or that one. It works perfectly, but they’re very disparate systems. And so it was too much effort to try and standardize at the coding level. So we shifted one level up and standardized the reporting layer. Cool.

21:53
Jaco
Then I suppose the tool in this case, the software, which is Flow, made it possible to do that.

22:00
Lenny
At least.

22:01
Jaco
There wasn’t any specific requirement around anything proprietary for Flow to be able to do that.

22:07
Rowan
Yeah, 100%. I’m glad you pointed that out. With Flow, you don’t have to have Siemens, you don’t have to have this; Flow is independent of those. And that’s what’s beautiful: you can pull information into Flow from a variety of sources, SQL databases, manual entry, which a lot of Africa still is. And the reason we also went with Flow is manual

22:27
Jaco
Manual entry.

22:27
Rowan
Manual data entry, yeah, correct. There’s still a lot of manual data entry, but our goal throughout Africa is to get the rest of the African breweries connected, similar to our South African breweries, where you have a connected layer and infrastructure that enables you to pull data from each PLC and store it in the historian.

22:47
Jaco
Okay.

22:47
Rowan
It’s key. And then it’s very easy within Flow to change from manual entry to just pulling it from the data source.

22:54
Jaco
And in your case the historian would be AVEVA Historian throughout South Africa, and potentially others in the rest of Africa.

23:02
Rowan
Yeah. It’s our legacy system, and one which we are happy with at the moment.

23:07
Lenny
Okay.

23:07
Rowan
But AVEVA is our historian.

23:11
Jaco
And that’s what you standardize.

23:12
Rowan
Yeah. And that’s what we standardized on. But as I say, we went for functionality and what we want out of the product, and Flow has a really nice built-in capability to pull from the AVEVA Historian; that’s also why we went with it. Okay. So, the problems that were resolved: our employees are entering data into the same tool on site, so once again, standardization of the tool being used. There’s an audit trail of who, when and why, and there’s a single source of truth, so you eradicate those issues. There’s no more arbitrary ‘Johnny entered it there’. And then: where’s the paper? Yeah, where’s the paper?

23:46
Jaco
Where’s your signature? You made a note there.

23:48
Lenny
Exactly.

23:49
Rowan
And then operators just become more efficient, with fewer points of duplicate data capture. That was step three, getting that layer done. But now there’s so much valuable data being entered at the breweries; how do we get that up? And we want it quickly. So step four, sorry, before I get there, was standardization of what is being reported. The tool that we use is Flow, but we needed to standardize around what they were reporting on.

24:13
Lenny
You mean the KPIs, and the calculations to derive them.

24:16
Rowan
Exactly. So we created a template which spoke to KPIs that we know guide efficiency, that we know improve your plant, and that we know you should be discussing in your packaging meeting. If you’re not talking about packaging beer

24:35
Lenny
Loss.

24:37
Rowan
you’re missing a beat there. Why are you having a meeting? If you’re not talking about GLY and LEF and efficiencies, why are you having meetings? So we standardized around that, because that’s another thing we found throughout Africa: there was picking and choosing of what we, as a brewery, feel is important. And don’t get me wrong, you can still do that, but above and beyond that, you have to talk about this stuff. If you’re not talking about it, what are you doing? So we templatized those KPIs that we

25:04
Jaco
wanted people to report on. Correct standards, templates.

25:10
Lenny
That’s your design.

25:11
Rowan
So we standardized on metrics and measures. These were department-specific reports, as I was saying; those are your dashboards, which we call our BPO dashboards, plus entry forms, manual entry forms, because, as I specified earlier, not all of Africa has databases that store pack volume and all the rest. Okay, so that was entry forms and manual entry.

25:32
Jaco
And Rowan, by the way... sorry, what I didn’t mention is, for anybody that’s online with us: if you have any questions, we’d love to have a conversation with you. Please type your question into the Q&A. We have a comment from Graham on your point about the model. Graham’s comment is: in the world of acquisition, it’s not often possible to standardize at the edge, so we need to recognize the need to standardize the model, as Rowan and the team did.

26:00
Rowan
100%, yes, we standardized the model at least. And then all brewery instances have the templates configured from the template server, and they pull the templates down. So, as a zonal team (and I think it’s evident in the picture), we decide, with our leaders and our department heads, what KPIs they want to report on, or would like to see reported on, at all the breweries. And that also means that when they head out to the breweries for their audits, they know that everyone’s looking at the same dashboard, and then they can compare apples with apples. If I’m going to judge you harshly, it’s because you’re not entering what that plant down the road is entering.

26:35
Jaco
How many breweries are in the Africa zone?

26:39
Rowan
28 to 29, somewhere there. Yeah, it’s big. It’s very big, and that’s not including our vertical operations either. We’ve got maltings plants in Alrode, in Caledon, in Lusaka, and elsewhere. That’s another big thing: we’re moving this exact model across for the maltings plants as well. And then the problem to solve, and I alluded to this earlier, is that the valuable data for the use case is being reported, right? I’m not reporting everything; I’m not asking them to enter 400 data points. It’s what is valuable, what’s going to guide the leadership when they eventually see it, and what’s going to guide the team at that site to make better decisions. Standardization of KPIs across Africa, and also of the KPI definitions, because that’s also what we found: you’ll go to a site and they’re reporting energy efficiency. What’s your energy efficiency?

27:28
Rowan
How do you get there? You go to another plant and look at their energy efficiency calculation, and one of you is getting an A-plus on your report card while the other is getting a D-minus, and you’re like, well, why?

27:40
Jaco
It’s because they’re using different calculations.

27:42
Rowan
That’s why we have to compare like for like.

27:45
Lenny
Right.

27:46
Rowan
And then I just wanted to show one of our typical dashboards. This is from plant Ndola, which is in Zambia, and this is a typical example. We have a daily, a weekly and a monthly dashboard, and this is roughly what it looks like: you’ve got your different sections, your safety, your environment, your quality and your performance. Once again, these are zone-mandated KPIs that they need to report on. And this speaks to, firstly, are they entering the data that they should be entering? And nicely, from a snapshot you can see: am I green, am I red? Just a visual indicator of where you should be focusing.

28:23
Jaco
And this weekly looks the same for all the sites?

28:26
Rowan
Every single site has the same weekly dashboard.

28:30
Lenny
Obviously, you don’t spend time discussing the stuff you don’t need to discuss in.

28:34
Rowan
the meeting. Spot on, because otherwise you end up with a list of like 400 KPIs. How, in a half-hour meeting, do you even get past the first ten? You just don’t. So let’s streamline the data. Let’s understand what is valuable data (not all data is valuable; what data is valuable?) and let’s discuss those points. And that’s what we tried to get to. Right now you can really see, like the standardized packed volume versus planned volume: they obviously aren’t producing as much as they should be.

29:01
Lenny
Right?

29:01
Rowan
Easy. You can check it right there. So that’s a typical dashboard. Now, the key thing was: we’ve got all this valuable information at the plants, and we need it up at the zone level, where our zone leaders, our heads of packaging throughout the whole of Africa, can compare these plants on a daily basis. How do we get the data there? Traditionally it’s always been a huge problem for us, because everyone’s using Excel sheets, all different Excel sheets. You’ve now got to email the Excel around, and then it’s: no, I’ve sent you my results, but which one is it? Is it version two underscore...? And so people would email these in, and then, unfortunately, your zone heads have to spend three days out of every month just compiling, just putting all the data together.

29:52
Rowan
What a dog show it was. Just not on.

29:56
Jaco
And what a massive window that opens for human error. Whenever there’s anything manual, there’s a big chance of human error.

30:06
Rowan
Yeah, exactly.

30:08
Jaco
One of the biggest sinners in our industry.

30:10
Rowan
Yeah. Not intentionally so, but unfortunately it just happens. Also, the visibility of all this data was never there before, because it’s all in Excel, in this big spreadsheet. So let’s say I was in packaging and I wanted to know how much water we were using. As a packaging manager, it’s not one of my KPIs to know how much water the whole site is using, but I’d like to see it, and there was no visibility on that. You’d have to walk over and have a chat with the energy and fluids manager, and he’d have to pull up some Excel sheet. So we wanted to get rid of that, and we wanted to make data available and visible as quickly as possible. So step five was really driving the data upwards.

30:51
Rowan
So what we did is we created a zonal instance of Flow, and we set up replication, or integration, from all of the sites across Africa. Data was being replicated up into this Africa Flow instance, with bulk replication configured at each site. There are about 120,000 measures, and that’s going to go up soon: we’re actually developing new logistics dashboards, so it will probably bump up to around the 160,000 to 170,000 mark. Yeah, that’s the thing.

31:24
Lenny
When you see the dashboard in the previous slide, I don’t think you really get a sense of the amount of data that goes up and through.

31:31
Jaco
On the topic of the dashboard: Dylan asked online whether there’s any drill-down capability in a dashboard. In other words, for anything that is red, for example, out of scope, out of measure: do you have any capability to drill down into those values and data from the dashboard? You can.

31:49
Rowan
I don’t know if you want to speak to that from a Flow perspective? But, yeah, sure.

31:52
Lenny
So let’s look at that month-to-date value, that 148 right there. If you click on that value, it will actually show you all the values that were utilized to make up the total for the month. You can also see if it was a calculation: we will actually show you the calculation that was used to get to a specific point, and then you can drill down into the values until you get to the raw data that was retrieved from the data source.
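(A rough sketch of the rollup being described: the single dashboard figure is an aggregation of the daily entries beneath it, so the UI can always expose its constituents. The numbers below are invented for illustration.)

```python
# Hypothetical daily entries behind one month-to-date (MTD) dashboard value.
daily_packed_volume = {
    "2021-06-01": 50.2,
    "2021-06-02": 47.8,
    "2021-06-03": 50.1,
}

def month_to_date(values: dict) -> float:
    """The MTD figure shown on the dashboard is just the sum of its daily parts."""
    return round(sum(values.values()), 1)

def drill_down(values: dict) -> list:
    """Clicking the MTD value reveals exactly which daily values made it up."""
    return sorted(values.items())

print(month_to_date(daily_packed_volume))  # 148.1, the one number on the dashboard
print(drill_down(daily_packed_volume))     # the constituent daily values behind it
```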

32:19
Jaco
And the second part of Dylan’s question (thanks for the question, Dylan Graycon, by the way): he wants to know whether you can eventually take that kind of drill-down all the way to the sensor.

32:34
Lenny
Potentially, yes. Currently, what we’re doing with Flow is we have something that we call a pass-through chart. The pass-through chart allows you to chart data that’s coming from the historian level. So if that sensor data has been historized in the historian, we can get to the point where we actually show you the raw data before we, as Flow, do the calculation or the aggregation of that data.

33:00
Jaco
All right. I think Dylan is aiming at understanding how you eventually get the end-to-end story when it comes to something like maintenance: if there was a failure that surfaced, at what level did it surface, and can you get that connection end to end?

33:14
Lenny
Correct. What a lot of people do is mix and match historical KPI information with real-time information. The Flow dashboarding is HTML5 compatible, so you can embed an iframe or a link to another component. A lot of the time, people take their normal SCADA or historian solution, whatever it is, and embed those raw-data trending tools right there in the Flow environment. So you’ve got your actual aggregate (see, it’s red; why is it red?) and you’ve got access to your historian trending right there.

33:47
Jaco
Good question. Thanks, Dylan. I hope that was a quick enough answer. Sorry we interrupted.

33:52
Rowan
That’s what it’s about. Yeah. As I was saying, this is really what the tiered approach is about: you’ve got your tier-one servers at the bottom and your tier two at the top, and all that data moves up within a matter of minutes. Let me get on to the problems resolved.

34:08
Jaco
Right.

34:08
Rowan
Okay. Data comes through in the same format, so it’s already nicely packaged, because your tier-one Flow instances are doing that. And that nicely packaged data is all standardized (I mention the word again: standardized) into your reporting server. So not much ETL needs to be done in that secondary layer; it’s all done for you. Extract, transform...

34:34
Jaco
We use a lot of TLAs.

34:37
Rowan
TLA. WTF?

34:42
Jaco
Yeah.

34:42
Rowan
WTF?

34:44
Jaco
Yeah.

34:45
Rowan
So extract, transform, load.

34:46
Rowan
Right.

34:47
Rowan
It’s one of the biggest things that every business needs to do, right? You need to get data from somewhere, you need to package it nicely, codify it, whatever it is, and then load it somewhere else. And the beautiful thing about this is that all of the packaging and transforming of your data is done at tier one, so you don’t need to do much transforming at your tier-two layer. And, as I mentioned, middle management no longer need to spend hours collating and transforming data. It’s right there for them.
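(A compact sketch of the tier-one job being described: extract from a site-level source, transform into the standard shape, load toward the zone store. The table, column and plant names are invented; this shows the pattern, not AB InBev’s actual pipeline.)

```python
import sqlite3

def extract(conn: sqlite3.Connection) -> list:
    # Extract: raw rows from a site-level database (hypothetical schema).
    return conn.execute(
        "SELECT ts, kpi, value FROM site_readings WHERE ts >= date('now')"
    ).fetchall()

def transform(rows: list, plant: str) -> list:
    # Transform at tier one: codify into the standard, zone-wide KPI shape,
    # so the tier-two layer receives data that is already packaged.
    return [
        {"plant": plant, "timestamp": ts, "kpi": kpi.upper(), "value": float(v)}
        for ts, kpi, v in rows
    ]

def load(records: list, zone_store: list) -> None:
    # Load: hand the standardized records to the zone-level store
    # (in reality a replication target, not an in-memory list).
    zone_store.extend(records)
```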

35:13
Jaco
Which is not their skill or their capability or their strength or their knowledge or their experience. That’s not the best thing for them to be doing, and it actually frees them to do other things.

35:21
Rowan
That’s not their job. Their job is meant to be subject matter experts.

35:26
Jaco
Data crunching.

35:27
Rowan
Exactly. They need to be seeing data now, and contacting the sites and saying: listen, Ndola, or whoever it is, great job on XYZ, but you guys need to focus on this. Have you implemented this COP (a good operating practice) to ensure that you improve that number? They don’t need to be spending weeks and weeks transforming data; it’s just not what they’re meant to do. And then, data is replicated to zone within minutes. Huge. And this is, I think, one of the biggest things: we’ve now got these big dashboards that we’ve put up everywhere within our supply area. Supply being beer supply; that’s all the departments I’m talking about, so not marketing and everything, but the supply and production of beer. The big dashboards are available, and they’re constantly on cycle.

36:12
Rowan
So you can see... and let me actually get to that right now. That’s an example of one of the dashboards.

36:17
Lenny
Right?

36:17
Rowan
So we have a West Africa BOU, a brewing operations unit, where we view every single brewery within West Africa side by side. Once again, apples for apples, right? You’re comparing the same KPIs, and you can see right away where your big problems are. Okay, so your BTS efficiency at Gateway, which is a plant in Nigeria: it’s not where it should be for the week, because this is the weekly report. Not where it should be. Okay, cool. So the energy, environment and safety specialist will coordinate immediately. They see that (no need for number crunching; it’s right there), call them up and say: guys, what’s going on? What’s happening there? Do you need some assistance? How do we help you get that number better? So it rapidly improves turnaround time on things.

37:06
Lenny
Two quick things. I know that you also put up some massive screens, even in your cafeteria, that scroll through these dashboards, so these things are actually visible in the cafeteria at head office, where people sit and eat. Another thing that I want to point out as very interesting is that you guys extended the model beyond just a manufacturing focus: there are safety KPIs in here, there are environmental KPIs. So it really encompassed the capability of manually entering data and getting data from multiple sources to create one single version of the truth, where all of that data sits. Just interesting to note that it actually extends beyond the traditional way we think about energy reporting.

37:49
Rowan
But obviously safety, right: it’s not pulling from a sensor in the field. It’s not like there’s a sensor that’s

37:56
Jaco
Been so many engineers.

38:00
Rowan
What’s beautiful about Flow, especially at the tier-one layers, is that you can mix and match manual entry, database connections, historian connections; it’s all collated, and here it looks the same. It’s one single, unified front. And so, yeah, this has helped us a lot. As I mentioned: turnaround time to resolve issues, and the standardization at the lower levels of what people are reporting and how they’re reporting it. And we trained up, I mean, we had delegates from all over Africa last year; we brought them out here, because this is such a powerful tool. And I’m happy to report that a lot of the guys are using it extensively. They are creating their own dashboards now for their plant needs.

38:44
Jaco
It’s a tool, absolutely. But to your point, it’s all about the methodology, the approach, the model behind it. It’s a tool applied with a process that makes people’s lives easier and enables better decision making.

38:57
Lenny
Correct.

38:58
Rowan
So I’ll just finish off with what the future is.

39:02
Jaco
Big data.

39:03
Rowan
Yeah. So I know I said earlier that not all data is valuable, but I also said this: once you know and understand your data, and once you have validated and verified your data, you can get value out of sending it to an AI setup.

39:21
Jaco
But then again, it’s about understanding the problems that you’re trying to solve, and where the data that you have can help you in solving them.

39:27
Rowan
Exactly. And once again, don’t try to use data that’s not the right data to solve a problem just because you have it: I’ve got all this data, it’s wonderful, maybe I can use some of it to give me some gains, some efficiencies. Let’s go back to first principles, right? Is this really the data you need? If it’s not, well, then let’s get the data you need to give you an improvement there. Just to finish off: data value through visualization. We have central Flow report servers and plenty of data that we can utilize. We’re looking at using graphical tools, because numbers are nice, but when you display them graphically, that’s what people resonate with and respond to: some reds and greens, some donut charts and everything. Flow, admittedly, does not need to be a Power BI or Qlik Sense.

40:14
Rowan
And I like this, because the answer is to post the data through, maybe from Flow, because it obviously has the ability to integrate outwards.

40:23
Lenny
And importantly, that data is now clean and in context. There you go. It’s in a nice model and

40:29
Jaco
you can repurpose that for anything else, potentially.

40:33
Lenny
And the big thing is, it’s data that is coming from a manufacturing space that we understand. And when we give it to the IT type of tools, it’s already contextualized.

40:43
Rowan
Contextualized, 100%. Context and everything.

40:45
Jaco
So context is everything.

40:46
Rowan
Yeah, for sure. We hand it to them, and they can then go wild with it and use Paint or whatever they want. And the last one is data to cloud, with the right data, I emphasize again: standard web posting of data, or the MQTT consumer item in Flow. We’re heavily looking at doing that.

41:08
Jaco
Another one of the possibly unknown, amazing benefits of MQTT and the broker architecture is that an MQTT broker can basically inject data from wherever into whatever, with the context from source, which is just absolutely amazing.
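(To make that concrete, here is a minimal sketch of publishing a contextualized reading over MQTT with the common paho-mqtt Python client. The broker address, topic and payload fields are invented for illustration, and a real Sparkplug B deployment would carry protobuf-encoded payloads rather than plain JSON.)

```python
import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

# The payload carries its own context from source: plant, equipment, units, quality.
reading = {
    "plant": "Ndola",
    "equipment": "WTP-01",
    "measure": "conductivity",
    "value": 412.5,
    "units": "uS/cm",
    "quality": "GOOD",
}

# Any consumer subscribed to this topic on the broker receives the data
# together with its context, regardless of which system produced it.
publish.single(
    topic="africa/ndola/utilities/wtp01/conductivity",
    payload=json.dumps(reading),
    hostname="broker.example.local",  # hypothetical broker address
)
```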

41:22
Rowan
Maybe as a bit of insight: our Dar es Salaam plant is seriously looking at this, at how we get data (relevant data, once again) from Dar es Salaam to a potential AI firm or whoever is able to utilize that data, and then try and help them get efficiencies through their water treatment plants. Because the water quality in Dar es Salaam, so close to the ocean, has a lot of salinity and a lot of hard materials, a lot of calcium and magnesium and all the rest in the water, and that wreaks havoc on the water treatment system. And so how do we get external help from serious experts that understand water quality?

42:05
Jaco
How do you send them the data?

42:07
Rowan
Yes, we’re seriously considering using Flow to send this data. But once again, standardization around the philosophy and the protocol, which is MQTT, universally accepted. And if someone can’t engage with MQTT, don’t have a chat with them, because they don’t know what they’re on about. And that’s really cool, I suppose. That’s it, guys; thank you very much.

42:29
Jaco
If there are any questions, I think we’ll summarize them. And if it’s okay, if you don’t mind us sharing the slides as part of the closing off, I think we’ll share some of your slides and some of the key points. But I love that: nice, quick, easy top three things to consider, think about, and implement.

42:48
Rowan
Go for it.

42:49
Jaco
Thank you very much, Rowan. Yeah, sure.

42:51
Lenny
Thanks.

42:51
Jaco
All right, so we’re moving a little.

42:52
Lenny
bit into the product-specific updates now.

42:54
Jaco
I’m worried about our time a little bit.

42:56
Lenny
Worried about the time.

42:57
Jaco
No, it was valuable. I’m not sure if we’re going to be able to cover the full demo; maybe we should do that as a separate session.

43:04
Lenny
I will see what I can do; I just want to show one thing in the demo. All right, so obviously I’m going to cover Flow a little bit, just the new release that’s imminent. For the people that don’t know Flow: as you heard from Rowan’s chat, Flow is an information platform. We call it the information management platform, and it really is that ETL (extract, transform, load) tool that can get data from all the different sources we discussed in the topic: manual data, data from IoT sources, as well as our typical OT SQL databases and historians. It contextualizes that data.

43:44
Lenny
It gives us that single version of the truth where we can standardize, as well as the capability to track KPIs (very importantly, standardized KPIs that we can manage with our template functionality), and it really becomes this bridge between the OT layer and the IT systems. Jaco, I think at the end we’re going to talk about our next webinar and the unified platform kind of concept; Flow definitely plays a big role in that unified framework. And then, obviously, at the end of the day it’s all about decision support: we need to be able to give people the data to make decisions, but the right data, in the right context, in the right format, so they can make those decisions.

44:26
Lenny
Yes. Rowan spoke from the food and bev kind of environment, obviously, but Flow is a universal tool that can be applied to all the different industries. Also very exciting from the Flow team: they recently joined the Eclipse Foundation. That means all three of the products that we’ve got in our stable, Canary, Ignition and Flow, are part of the Eclipse Foundation, heavily utilizing the Sparkplug B specification for MQTT as that open standard to communicate with one another and really create this unified architecture that we’re speaking about.

45:02
Jaco
Many others are in the Eclipse Foundation. IBM, I think IBM, and there are even

45:07
Lenny
A few end users. Chevron as an example.

45:12
Jaco
Everybody’s catching up.

45:14
Lenny
Yeah, definitely. All right, so last time we spoke, we kind of alluded to what’s coming in Flow: the Flow 4.5 point... dyslexia, sorry guys, the 5.4.0 version.

45:27
Jaco
I think just drop the .0. If there is a .1, then it becomes really good; then you can make it 5.4.1.

45:34
Lenny
So that will be at the end of June, and there are some really cool things that we’ve added in there. We’ve added the ability to connect to InfluxDB, one of those NoSQL kind of databases that we see popping up quite a lot in the IT space.

45:47
Jaco
Definitely a lot of chatter on InfluxDB.

45:49
Lenny
Definitely.

45:50
Rowan
Just as an aside: InfluxDB in South America is massive. They only use InfluxDB.

45:56
Lenny
We’ve enhanced our model security: you might have some very delicate or sensitive KPIs, around cost for example, where you want to control who sees the data, so we’ve enhanced our model security to cater for that. From a usability perspective, Rowan spoke a lot about templates, about having a template for the measures and the KPIs; we’ve extended that to the charting as well, and I’ll quickly demo a few things around that. We also included updates for .NET. And we have additional retrieval types: in the past, Flow could aggregate or calculate at five-minute intervals at the finest; we’ve now dropped that so you can have one-minute, two-minute or three-minute intervals, just to get a little bit finer. And a lot of these things are coming from our customers; we take our customer feedback very seriously.

46:50
Lenny
We try to be the Joneses, not to keep up with them, in this space, so we really work very hard to bring all of these new features in. So, very quickly (I know the time is a bit short, Jaco), I really want to demo just one scenario within Flow, to show how template and report creation has become a lot simpler and easier. All right, so I’m going to move here: I’ve got a bit of a Flow scenario, a model, already built, and I’ve got some templates that I’ve built as well. For the guys that have seen me do this demo: you’ve probably seen me building the model manually, by hand, in a lot of my videos. What we’re going to promote heavily now with Flow is template-first configuration.

47:32
Lenny
And there are a few things that we’ve added to make that very simple. The first thing: we’ve always had model attributes, where you can associate attributes of a device (serial number, plant location, or equipment number) with our metrics, but we’ve now extended that to the template functionality, and we’re using it to do a lot of cool configuration items. So at any point in time, if I open up this metric for a boiler, I can say: hey, Mr Boiler, which area of the plant do you belong to? Which piece of equipment are you? What is your equipment number?

48:06
Lenny
And the reason why we add these attributes: if I look at the tags currently in my historian here, and if I have that standardization on the OT layer, you’ll notice that I’ve got a very useful equipment number at the front. The only thing that differentiates boilers one, two, three and four is the number at the front. So why can’t I use that number to automatically populate my retrieval properties in Flow and automatically get the data from my historian? And that’s exactly what we do. You’ll notice that I can now use these model attributes in places like the retrieval section: I can take that prefix, add it to the front, and when you deploy it, Flow will automatically connect to that historian and get that data as well.
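(A small sketch of the pattern being demonstrated: substituting a model attribute such as the equipment number into a templated historian tag path. The attribute and tag names are illustrative, not Flow’s actual syntax.)

```python
def resolve_tag(template_tag: str, attributes: dict) -> str:
    """Substitute model attributes into a templated historian tag path."""
    return template_tag.format(**attributes)

# Hypothetical boiler instances: only the equipment-number prefix differs.
template = "{EquipmentNumber}-Boiler.Steam.Flow"
for eq in ("010", "020", "030", "040"):
    print(resolve_tag(template, {"EquipmentNumber": eq}))
# -> 010-Boiler.Steam.Flow, 020-Boiler.Steam.Flow, ...
```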

48:57
Lenny
I can even use that, as an example here at the top, to define the actual name it gets in the model when we instantiate this guy. All right, so let’s do that very quickly. The same thing that we’ve done for the templates, we are doing for the reporting as well. I’ve got a little bit of a report here; currently this report is completely blank. You’ll notice if I go to my report here: there it is, there’s nothing in it. So what I can do now is populate it via my template. In the past, I had to drag it from the model.

49:32
Lenny
What I can do now is go from the template and say: hey, go and populate my sections based on the boiler templates that I’m going to instantiate, and show me the pressure, the steam and the temperature from that point as well. So I’m configuring the report from the template, not the other way around, not from the actual model that I’ve already configured. The other cool thing is that I can use these model attributes in here as well. This is a comparison table; I’d like to compare my different plants with one another. So I go to my model attributes and pull them in: hey, please show me the plants first, then, from a template perspective, show me all the boilers.

50:15
Rowan
Right?

50:16
Lenny
So I’ve got that capability, and I would like to compare each and every one of these boilers; I’d like to compare the efficiency numbers with one another. That’s it. That’s all I have to configure on the report. All that’s left for me to do is actually instantiate this guy. So I take the plant, drag it into my model here, and now it’s going to ask me a whole bunch of things. It’s going to ask me: what area is this? This is part of my utilities. Equipment number: that’s the number that I need to get from the historian. I know my equipment numbers, so I’m just going to do a range for now: I go ten to 50.

51:00
Rowan
Can I ask you, Lenny: the actual number was 010, correct? Is that zero just padding? You didn’t need to type in 010?

51:10
Lenny
No, because what I’m telling it here is to step from 10 to 50 by ten, and in the formatting I can say: pad it with a zero. Okay, perfect. Then the plant name: that’s going to be plant one to five, and again I can pad it with two digits. The plant number: again one to five, padded the same way. And I can give it a plant prefix of PL in the formatting. I also set my boiler number, one to five, again padded. Perfect. So if I look at the preview here, it tells me how many plants I’m going to instantiate, and with which numbers.
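(For the curious: the stepping and padding being configured amounts to something like the following. The ranges and prefixes mirror the demo, but the code is otherwise illustrative.)

```python
start, stop, step = 10, 50, 10            # equipment numbers stepped by ten

for n in range(start, stop + 1, step):
    equipment_number = f"{n:03d}"         # padded with a leading zero: 010, 020, ... 050
    plant_name = f"PL{n // step:02d}"     # hypothetical plant prefix: PL01 .. PL05
    print(equipment_number, plant_name)
```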

52:02
Rowan
This preview is a winner, and obviously.

52:05
Lenny
I want five of them, right? So if I press the preview now, it redoes it and automatically populates. If I do six (obviously I didn’t specify six of them), it will just tell me: hey, there’s something wrong; it’s going to create a copy, so that’s obviously wrong. Let’s fix that, hit the preview, press continue, and I instantiate all five of my plants, deploy them out, and it goes and builds my model for me. All right, so let’s deploy these guys. And because I used the template from a reporting perspective, when I refresh this guy it takes all of that into consideration and automatically adds the different plants and all the different sections for all of those boilers that I’ve created.

52:52
Lenny
Flow is already populating it in the back end, and as the data gets deployed, it will start filling it out.

53:02
Rowan
How do I push that report template down?

53:05
Lenny
All right, exactly the same as what we have: we’re going to have the capability to configure a template server, and that report will then come from a reporting server. We’ve also included copy/paste, so you can copy and paste between two different Flow instances.

53:21
Rowan
And if you’ve got a scenario like the one we have, we’ve got a template that’s

53:25
Lenny
Going to serve that purpose. Cool. Other thing, very quickly, as we mentioned, influxdb. So we’ve created an influxdb data source so we can obviously browse that namespace from influx, get the different tags that’s configured. I’ve got a little influx here. There’s the data influx. Influx got the standard of being able to add filters and tags and field devices to that. Or you can have the capability to write your own kind of scripts within influx depending on what you want to filter and how you want to do it. We cater for both of that in flow. So I’ve got in my model here, influx tags that I’ve already created. So we will pull that data from influx and you can then have a decision on what the tags is that you want to configure.

54:09
Lenny
So here I’ve got the tags either being available in the drop-down, or for the guys that are hardcore and want to build their own scripts, you can go and pretty much write your own Influx query scripts as well, and Flow will be able to populate that. And if I go to my little InfluxDB dashboard, here’s the data that we pulled from Influx, aggregated. And then inside of Flow, if I click, and again, for this one I’ve utilized our new one-minute aggregation buckets, so we’ve got that minutely data available inside of the Flow reporting as well. Cool. So that is pretty much the aggregates of that data in a one-minute bucket from Influx. Cool. I know we’re running a bit out of time, so that’s unfortunately all I can demo for the new versions for now.
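For readers who want to try the hardcore route Lenny mentions, here is a hedged sketch of a one-minute aggregation query against InfluxDB 2.x using the official Python client. The bucket, org, token, measurement and field names are placeholders for illustration; this is a plain Influx query, not Flow’s internal API.

```python
# A sketch only: the kind of one-minute aggregation query you could write
# yourself against InfluxDB 2.x. Bucket, org, token, measurement and field
# names below are hypothetical, not from the demo environment.
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

flux = '''
from(bucket: "plant-data")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "boiler" and r._field == "efficiency")
  |> aggregateWindow(every: 1m, fn: mean)  // one-minute aggregation buckets
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())
```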

55:05
Lenny
But we’ll share a more in-depth demo a little later.

55:10
Jaco
And the sort of expected release timing? End of June. End of June, yeah. Fantastic. Cool.

55:16
Lenny
Other thing that also came out from the Flow side: we had our OEE KPI series, and that video is now live on the Flow website. We also configured and created a whole bunch of how-to guides. This is something new that we will promote from the Element 8 side. A big thing for us is obviously security, so we’ve created a guide on how to bind certificates to Flow if you want to use SSL communication between the Flow components. We created a guide on how to filter out bad attribute values by utilizing custom expressions. And we also created a guide, and we get this quite a lot, for people migrating to new server architectures, moving Flow DBs across, et cetera, and what the process of actually doing that is.

55:59
Jaco
That’s helpful. By the way, if you are concerned that you missed something, we are recording the session. We will share the recording afterwards, as well as links to all the relevant docs and things we’ll be talking about. Perfect. All right.

56:13
Lenny
And with that, I’m going to hand over to Laura.

56:15
Jaco
Thank you, Laura. What’s happening in the world of Canary?

56:17
Laura
Yes, please, let’s have it. Thank you very much. Okay, so for all the Canary lovers out there, I will be talking about our how-to guides, just like Lenny did, and then a little bit of the 21.2 features.

56:30
Jaco
That’s the latest version.

56:31
Laura
Yes, the latest version, and then the newest Canary community. So starting off with our tech notes, or our how-to guides. As you can see, we have actually created a lot of them for Canary specifically; there are a few guides on how to utilize Canary to its full potential, of course. So we have five for Canary so far that you can find on our website under our how-to’s resource tab, and you can actually find all of the tech notes for Canary, Ignition and Flow.

57:04
Lenny
And obviously, just quickly on this: if you’ve got any ideas for tech notes, or something that you’re struggling with, please email us at support at Element 8. A lot of these tech notes are driven by things that we see a lot on tech support, so hopefully this will make life easier for users.

57:20
Jaco
These are pretty much the ones we thought would be most practical and most useful.

57:23
Laura
Yeah, so these are specifically ones that a lot of people have asked us about and logged support tickets for. So I know some of them look fairly easy, but take, for example, licensing the Canary system, or even connecting to an OPC UA server. That’s usually for the newer users that would like a very quick how-to. It’s not.

57:42
Jaco
Complicated, but there’s a couple of steps.

57:45
Laura
And it’s very nice; we have a lot of screenshots, step-by-step guides, explanations, definitions.

57:51
Jaco
I like the data trends in WinCC. That’s quite a requested one.

57:55
Laura
So that is the newest one. That’s nice. So what we do is we just show you how to actually drag your Canary data trends into a WinCC project.

58:08
Jaco
Fantastic.

58:09
Laura
In the graphics designer. So if you maybe want to check it out, please feel free.

58:14
Rowan
Yeah.

58:15
Laura
So I’m not going to hammer too much on that, because.

58:19
Rowan
Question with those links to all the how to guys. If you get a new version of Canary, right, are you going to update those specific ones?

58:29
Jaco
You’ll notice that for all of the how-to guides, we version them with the update version, so obviously we update them.

58:38
Laura
So most of these stay the same from a few versions back as well. Like the licensing one.

58:46
Lenny
It’s very relevant.

58:48
Jaco
Otherwise you want to make sure that when you use a how-to guide, it’s the latest.

58:52
Rowan
I think the key thing is you send out a link to people and you’re like, this is how you do it. And they upgrade three times and they go to the link and it’s pointless.

58:59
Laura
So that’s the whole idea, to keep people updated constantly. And then we have our new Canary system 21.2 that was released in May, and of course it’s currently available. Canary has a lot of new features; I’m just going to talk about a few. There’s the addition of a new data entry control in Axiom for simple manual data entry, and a bar chart control for quickly creating aggregated data bar charts, which is nice. There’s the addition of a rotation property on the label control in Axiom, and the addition of inactive timeout info to URL parameters. That’s important, which is great.

59:41
Lenny
People leave their dashboards open. The license, obviously, you want it to be released.

59:46
Laura
So yeah, timeout, yeah, those were very nice.

59:49
Jaco
Also a requested feature.

59:52
Laura
Then there’s the addition of tag properties, views, asset types, asset instances and asset tag tables in the ODBC connector. There’s a lot more, sure, but I’m going to ask you guys to please go and read up on it; I’ll provide the link in the next slide. Then, for usability, we have improved handling of duplicate certificate errors when loading a needed certificate, added to several services in our admin tool. We have improved efficiency when starting or stopping a large number of calculations in the calculation server. And then they have added multiple abilities to the existing data collectors; I know that for every single data collector there’s something new. For example, the ability to store manual data entries, like metadata, in our Sender and Receiver.

01:00:41
Jaco
And of course Canary is also part of the Eclipse Foundation, part of the Tahu working group with Sparkplug B. So MQTT is also becoming a very popular connector.

01:00:50
Laura
Yeah, and then some of the fixes: deleting connection groups crashed the admin tool if more than three groups were deleted at a time. The calculated trends could have caused issues with the value delta and zoom cursor, so that has been fixed. And too many threads were being created when too many calculations were being backfilled simultaneously. Like I said, there is much more.

01:01:13
Jaco
And the availability of the upgrade. There we go.

01:01:16
Laura
Yes guys, please go and read up in the documentation and the release notes. And we also recommend, like we say, that you check it every single month, because there are new things that are valuable. And then lastly, Canary Labs. So they have created this amazing, friendly online help guide called the Canary community. This was created with the sole purpose of letting users ask questions, solve problems, provide feedback, and users can also actually connect with the Canary team members themselves. So no more emailing support; you can just open a support ticket on the support center and then also check your status there as well.

01:01:55
Lenny
And they’re also moving the knowledge base.

01:01:56
Laura
Into this as well. Yes. That’s awesome. So you’ll see we have Feedback, Forum, Knowledge Base, Learn Canary and Version Info options there at the top. The Feedback is where you can log bugs and issues, discuss enhancements, and you can also request any new ideas that you have for future features that you would like to tell them about. The Forum enables you to share screenshots of your Axiom dashboards as well, if you’re really a show-off and you would like to show them what you can do. And then the Knowledge Base, like Lenny said, includes everything from the quick start guide to the system admin duties, and all of the client tools are discussed in detail as well.

01:02:40
Jaco
Amazing. It’ll be nice to see some of the South African people get onto the community. It’s such a valuable and proactive community, very similar to Inductive Automation’s community. Very often when you post a question, for example, you get a reply within less than an hour, and very often it’s not from Inductive Automation, it’s actually from another system integrator or another user elsewhere. So these communities are very valuable.

01:03:04
Laura
Yeah. And what’s nice, what I like about the Learn Canary section, sorry, is that, as everyone knows, they have the Canary Academy, but in Learn Canary they actually added training, webinars and Canary bootcamp material, as well as the Canary Academy. So there are a lot more resources there. And then Version Info, like we said, that’s where you can find pretty much everything.

01:03:25
Jaco
Join the community, have a look at the latest, take notes, read up on the latest release.

01:03:29
Laura
Yes, please.

01:03:30
Lenny
Thank you.

01:03:34
Rowan
Laura, I am going to, if I can get this to change, have a quick chat to you about your Ignition how-to guides and pricing, as well as the new features coming up in 8.1.6, or now called 8.1.7. I’ll explain that. And our new Exchange resources.

01:03:55
Lenny
Right.

01:03:56
Rowan
Okay, so what I wanted to touch on is how to spec your server, or your device. This is a question that we often find issues around, because maybe a server was over-specced or under-specced. Over-speccing is mostly not a problem; under-speccing is usually the problem. By default, we have three kinds of spec groups. The smaller one would be your ARM type of devices, your Raspberry Pis. Usually you would spec that gateway with one gig of RAM, up to four gigs of RAM, depending on how much resource you have. So usually that will range from connecting to one to two devices. So that’s the kind of device you would want to use an ARM architecture for.

01:04:54
Lenny
Right.

01:04:54
Rowan
And then your default usually would be your medium, which ranges from a PC or a server that has four cores at 3 GHz, up to four cores at 4 GHz. Yeah. Very light.

01:05:11
Jaco
Very light.

01:05:12
Rowan
And the difference being, obviously, how big your deployment is. And then lastly, we would have your large enterprise type deployment, where you would have your eight cores to 16 cores, with 32 gigs of RAM. All of them we recommend you run on SSDs. You can see clearly there, there’s no HDD. Please don’t do that to yourself.

01:05:40
Jaco
Maybe this is your summary of the processor types, but the guide itself actually talks through the steps.

01:05:49
Lenny
Yeah. So the guide includes, obviously, how you get to medium and large: what is the tag count, what is the client count, what is the device count. Obviously, the ARM-based stuff we’ll utilize for Edge, so that’s for the HMI and more IIoT kind of applications. But yes, the guide is very helpful for determining what the resources should be. And there’s another guide.

01:06:12
Rowan
Yes, for your historian, which is something that gets neglected, because we’ve sorted out the gateway and we think the historian is happy. Remember, coming back to your historian, you have a choice to use any SQL historian as long as it’s standard SQL. So that could be your Microsoft SQL Server, the standard one, or the MySQL server. Usually a small deployment would look like two cores, 2 GB of memory, and that would work best for zero to 100 value changes per second. So that’s the kind of performance you would expect, and obviously if you spec up and double the RAM, you would have up to 500 value changes per second.

01:06:51
Jaco
That you could process.

01:06:52
Lenny
Right?

01:06:53
Rowan
And then we could have your medium. We recommend that you use four cores, four gigs, and that will give you between 500 and 2,500 value changes per second in performance. Obviously, if you then need to spec up without needing to go to the higher-spec server, or you’re still in your medium deployment, you could bump the memory to eight gigs, and that would give you up to 5,000 value changes per second. Something I want to highlight is that this also depends on your historian, on the tuning of the historian that you’re using. So if you’re using MySQL, for example, with tuning you could get it up to 10,000. So you have some leeway depending on how you set up. But typically the Microsoft SQL will give you up to 30,000.

01:07:48
Rowan
So it depends on what your capabilities are; obviously you could have additional resources for the SQL tuning.
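As a rough summary of the numbers Rowan quotes, here is a small sizing helper; the thresholds are the ballpark figures from the discussion, not hard limits, and real throughput depends heavily on the SQL engine and its tuning.

```python
# Ballpark historian sizing from the figures discussed above; treat these as
# rough guidance, not hard limits - engine choice and tuning dominate.
TIERS = [
    (100,  "small:   2 cores, 2 GB RAM"),
    (500,  "small+:  2 cores, 4 GB RAM"),
    (2500, "medium:  4 cores, 4 GB RAM"),
    (5000, "medium+: 4 cores, 8 GB RAM"),
]

def suggest_tier(value_changes_per_second):
    """Return the first tier whose ceiling covers the expected load."""
    for ceiling, spec in TIERS:
        if value_changes_per_second <= ceiling:
            return spec
    return "large: benchmark your engine (tuned MySQL ~10k, MSSQL ~30k)"

print(suggest_tier(1200))  # medium:  4 cores, 4 GB RAM
```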

01:07:55
Jaco
Basically the setup and the configuration play a role. Obviously that’s also in the guide. And.

01:08:05
Lenny
They’re also benchmarking against the different SQL technologies; they aren’t equal, as was mentioned. So it depends if you’re using Postgres, et cetera. That benchmarking is also in the guide.

01:08:17
Jaco
Yes, nice guide.

01:08:18
Rowan
Okay, and what we all came here for: what’s new in the latest release, which is 8.1.7. If you have already bitten the bullet and installed, you might have seen you had the option to download 8.1.6; that is now 8.1.7, because 8.1.6 was rescinded due to bugs. Those have been fixed, and now the official download is the 8.1.7 release. Okay, so in that release we have a few features that have been added. We have scheduled scripting now, and we have Perspective updates; there are lots of updates coming to Perspective, in terms of performance as well. And then OPC UA drivers now come with diagnostics, so you have a few tags that tell you what’s happening with the OPC driver itself, and you can keep track of that.
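Scheduled scripting here refers to Gateway Scheduled Scripts, which you configure in the Designer with a cron-style schedule. A minimal sketch of what a script body could look like, assuming a hypothetical tag path and logger name:

```python
# Minimal sketch of a Gateway Scheduled Script body (Designer: Gateway Events
# > Scheduled). Runs in Ignition's scripting scope, where system.* is built in.
# The tag path and logger name are hypothetical.
logger = system.util.getLogger("NightlySnapshot")

# Read one tag each time the schedule fires and log its value.
qv = system.tag.readBlocking(["[default]Utilities/Boiler01/Efficiency"])[0]
logger.info("Boiler01 efficiency at schedule time: %s" % qv.value)
```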

01:09:13
Rowan
Right, we do have a few more changes in terms of usability, or what the developers and end users are looking at. So there’s a new UI for managing your icons, which obviously makes it easier for you; a creature comfort. We do have improved speed in the identity provider on authentication, so if you’ve logged in before it will not take as long to log in again. We also have some changes in our gateway auditing, and OPC UA fixes as well; we have changes in terms of how the parsing works in OPC. And then we also fixed a major issue with upgrades: when customers would upgrade from 7.9 to 8.1, we would have issues with the tag import XML. That has been fixed. And we have lots and lots of changes in terms of the web visualization.

01:10:06
Rowan
So this would be your Perspective web visualization fixes, and we also now have touch screen support as well. If that’s something you’ve been waiting for, now we have it for you. In terms of those guys who are using the 7.9 release, the last LTS will be 7.9.18, so be on the lookout for that. After that point you would have to go to your 8.1 release.

01:10:38
Jaco
Cool. So LTS is obviously the long-term support. 7.9 was the previous one, or the last; 8.1 is the most current.

01:10:44
Lenny
Yeah, this is the last maintenance release. It doesn’t mean that it won’t still be supported from a tech support perspective, but there will be no more development done on 7.9. So at some point you probably just need to think about your upgrade strategies.

01:11:00
Jaco
To 8.1. And famously, Inductive Automation puts out a ton of regular updates. In fact, it’s probably the only company that I’m aware of in our industry that actually has a nightly version that you can download, which is the work done by the devs for that day. It’s absolutely phenomenal; you can play around with the latest version where save was pressed by some developer somewhere. They really crank them out.

01:11:25
Rowan
Okay. And in terms of what we have for your how-to guides, we have a tech note for you for actually sending your alarm notifications through Telegram. Telegram, for those of you who do not know, is basically the competitor to your WhatsApp.
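The tech note itself is on the Element 8 website; as a hedged illustration of the underlying mechanism only, an alarm notification pipeline script can call Telegram’s Bot API over HTTP. The bot token and chat ID below are placeholders:

```python
# Not the tech note's exact script - just a sketch of the Telegram Bot API
# call an alarm notification pipeline could make from Ignition scripting
# (system.* is built in). Bot token and chat id are placeholders you would
# create via @BotFather.
def send_telegram(token, chat_id, text):
    url = "https://api.telegram.org/bot%s/sendMessage" % token
    # system.net.httpPost with a dict sends a form-encoded POST.
    return system.net.httpPost(url, {"chat_id": chat_id, "text": text})

send_telegram("123456:ABC-token", "987654321", "Boiler01 high temperature ACTIVE")
```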

01:11:43
Jaco
That’s what everybody wanted to use when WhatsApp was going to share all the information with everyone.

01:11:48
Rowan
That’s what I’m using. Okay, so the good thing about this: obviously it’s on your phone, secure, and a familiar interface, just like your WhatsApp. So you can read up on that tech note on our website as well. And our pricing: Lenny, if you can give us a click there. I don’t know if we have the time. There we go. So these are our three main packages for Ignition, right? We have our Basic, which will have, as usual, the standard unlimited tags, your historian, as well as your core modules, or core drivers, that come with Ignition.

01:12:36
Lenny
Right.

01:12:37
Rowan
And then we have our Ignition Pro, which adds to what we have in Ignition Basic. In this case we’re going to add the reporting module, as well as SQL Bridge, as well as the alarm notification. And then our Unlimited would basically be Pro plus the rest: the SMS and voice notification services, your sequential function charts.

01:13:06
Lenny
And.

01:13:06
Rowan
We have OPC DA and HDA modules coming standard with that as well.

01:13:12
Jaco
So the reason we designed this is obviously that the pricing is publicly visible on the website, but it’s in dollars. So we wanted to create a local rand version of that view, starting with the packages. And the packages, you’ll see, are all obviously unlimited clients, unlimited tags, unlimited designers, unlimited connections; it’s just grouped nicely together in terms of typical modules used.

01:13:37
Rowan
And then if you feel like those packages don’t work quite well for you, you can actually build on top of them, or build your own package depending on your needs. We have the platform, which would be the basic: it would have the unlimited tags, unlimited designers and clients, and your default drivers that come with it, the IA-managed drivers.

01:14:07
Lenny
Right.

01:14:07
Rowan
Then on top of that you could go and individually pick and choose the modules that you want to add, building your own custom solution instead of going with the packages.

01:14:16
Jaco
Let us know if this is useful. This is a request we had: to give just a quick snapshot of rand pricing. It is obviously based on an exchange rate of 14 rand per dollar. But yeah, hopefully this is useful to give you a quick idea if you’re trying to price a project or a specific site.

01:14:33
Lenny
Yeah.

01:14:36
Rowan
And last but not least, our updates and documents. We have our documentation that’s available on our website; once you have the slides you’ll be able to click on those links. You’ll see the underlining on those, so they will point you to the relevant documentation. But I wanted to highlight our Ignition Exchange. Let me go ahead there. Here we have a community where there are basically different modules, different resources you can download and import into your Ignition project for use.

01:15:12
Lenny
Right.

01:15:13
Rowan
For example, we have the ifm sensor UDTs and Perspective templates.

01:15:17
Jaco
That’s really cool. Yes, that’s really cool. If you’re not familiar with what IO-Link is, by the way, you need to.

01:15:23
Lenny
Be. Speaking about MQTT and all that: IO-Link, again, is a great open protocol, again a little bit lower down at the sensor level. They’ve created these UDTs and templates to actually query the data with the IO-Link protocol. The resources are there, for free; download them from the Exchange, pull them into your project, and you literally can have ifm sensor data with that. Also, one thing we want to just highlight is obviously the Siemens Open Library objects.

01:15:57
Jaco
Yeah, it’s also a biggie for us.

01:16:00
Rowan
So that will work with your Siemens.

01:16:02
Lenny
Open Library objects in the PLC. Again, you can go and download it, have a look at it, make changes if you want to, but it’s a very good starting point for those objects as well. We really would like you guys to look at the Exchange regularly. There’s some great features and stuff that.

01:16:19
Jaco
Gets added. It’s at 155 resources now; when the project started it was a handful. Great. Amazing.

01:16:28
Lenny
Cool.

01:16:30
Rowan
That is it for me. If you have questions, you can always post them in the chat.

01:16:35
Jaco
Absolutely. Let us know if there are any questions on any of the Ignition updates. Right then, just to close it off. I know we are way over time; I hope it was valuable. If you’re going to look at the next time that we’re going to get together, or ways to engage with us, or where to speak to us or where we can help you, here are a couple of things. We alluded to some of the work that’s been done at the Brewery of the Future. If you haven’t listened to the podcast with Christopher, Chris Clark from AB InBev, yet: it’s a very philosophical kind of a podcast, but just a really great chat about the value of people, I suppose, and how to empower people as part of your operations.

01:17:11
Jaco
And Chris talks through some of the work that AB InBev has done around that notion, that concept of building out your business around those key people. Really great podcast with Chris. Then we spoke about the IIoT certification that is done through Cirrus Link. You will learn all about MQTT, all about the value of edge-to-enterprise data ops; all of the words around IIoT you’ll learn as part of that certification. We’ll send you the link for that one as well. Please don’t forget about the training. It is classroom based, but you could also attend it virtually. We always prefer to meet with you face to face, but obviously given some of the precautions around Covid at the moment, we do also host them virtually.

01:17:54
Jaco
We’ll just share a couple of dates for the upcoming courses, and then two webinars that will be potentially valuable to you. Hopefully. The first one is choosing a SCADA system for the IIoT era. That is not us presenting that one; that is an Inductive Automation one, on Wednesday, 30 June. They talk through some of the considerations and things to keep in mind when designing and choosing a software or a tool set for IIoT, but SCADA specific. And then the one that Lenny and I will talk through with a guest, which we still have to get confirmation on, on Thursday, 29 July, the end of July, is: where’s my data? Data ops in the unified namespace. We’ll talk a little bit about what Rowan also alluded to earlier: what is the unified namespace? Is it a thing?

01:18:43
Jaco
Is it an idea, is it a notion? Is it a product? Is it a DVD that somebody drops off with you? No, it’s not. But why is the unified namespace so important in terms of bridging that IT/OT gap? So that’s a couple of things that you can look forward to over the next little while. If you’re still with us, you haven’t gone to sleep or jumped off or passed out, thank you very much for listening. We hope it was informative. We have time for some questions. For those that are still online, I think we have a question from Dylan. Dylan wanted to know: would Ignition be a good choice for linking Siemens PLC data to an SAP ERP system? The client wants to automate maintenance work orders and confirmation of maintenance actually carried out, end to end.

01:19:31
Lenny
So yes, definitely. There are business connectors available, and Ignition also has the capability to post data through a POST or PUT request to a web service, if that’s the requirement. One thing we just need to remember is what type of data we need to push to the ERP. Is it the raw data, or do we need to contextualize that data before sending it to an ERP? Is it the running hours for the week or the running hours for the month? If we think about proactive maintenance, maybe from a scheduling perspective. So just be cognizant of what we need to send; potentially there’s an aggregation layer that needs to happen. But yeah, there are business connector tools. There’s one for SAP that comes from the Sepasoft stable; obviously that integrates with the tags within Ignition. So you can definitely integrate to the ERP world.
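As a sketch of the plain web-service route Lenny describes (aggregate first, then push), an Ignition 8.1 script could use system.net.httpClient; the ERP URL and payload shape here are hypothetical, not a specific connector’s API:

```python
# Hedged sketch: contextualize/aggregate first, then POST to the ERP's web
# service. Runs in Ignition's scripting scope, where system.* is built in;
# the URL and payload shape are hypothetical.
payload = {
    "equipment": "Boiler01",
    "runningHoursWeek": 142.5,  # aggregated value, not raw tag data
}
client = system.net.httpClient()
# Dicts passed via data= are serialized to JSON by the client.
response = client.post("https://erp.example.com/api/work-orders", data=payload)
print(response.statusCode, response.json)
```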

01:20:25
Jaco
Definitely. Good question, Dylan. Thank you. And I think there’s also maybe another potential discussion around, if you are already, for example, using, I don’t know, WinCC or PCS 7, there are obviously some considerations there. You wouldn’t necessarily have to rip and replace; you could do this mutually, or in parallel with what you’re already doing from a control perspective. You could very easily just put Ignition on top of that.

01:20:46
Lenny
Layer just for that functionality.

01:20:48
Jaco
Good question. Dylan’s on fire today. I think we need to send.

01:20:53
Lenny
Cool.

01:20:54
Jaco
If there are no other questions, obviously you’re welcome to send us any questions that you have on email when we share the link to the presentation, as well as all the other bits. But thank you very much for joining us. You could have been doing a hundred other things, but you spent the time with us, so thank you for that. Thank you to Laura for the update on Canary, Tabelo on Ignition, Lenny on Flow; really exciting stuff on the way. And Rowan, it’s always just awesome chatting with you.

01:21:18
Rowan
What a privilege.

01:21:19
Jaco
I know it’s not always easy to get your time because you’re typically all over the country, but it’s always amazing to get your insights and feedback, and some of the really good work that you and the team are doing at AB InBev is groundbreaking stuff. Thank you. Cool. That’s it. Thank you very much. We’ll share the presentation. Thank you for your time. And we will see you, if not on the 29 July, at the end of August.

01:21:46
Lenny
Perfect.

01:21:47
Jaco
Sorry, end of September. For our next quarterly update. Thank you very much.

01:21:51
Lenny
Thank you. Cheers, everybody. Bye.
