By Clarise Rautenbach
01 June 2020

How To Collect And Store Edge Data With The Canary Historian



Transcript

00:00
Lenny
Okay, I think let's get started here this afternoon. Well, thanks a lot, guys, for joining us. Today is the second part of our three-part webinar series, and I'm going to talk a little bit about how we can get the edge data that we collected from our real-time sources last week into the Canary data historian, really do some cool analytics, and showcase a little bit of the capability of that Canary historian. Don't be alarmed if you missed out on last week's webinar. I'll explain everything again in detail so you don't have to feel that you're going to be lost in this session. I'm literally just going to carry on from where I left off, but you will be able to follow what I'm actually going to show here today.

00:55
Lenny
So don't be alarmed if you missed out on last week. Just before I get going, I want to introduce the panel that we've got here today. There's myself, Lenny; I'm going to drive the demo that we have here, showing you the capability of the Canary historian. We have Clarise, our marketing manager. Clarise, again, I feel almost like a broken telephone, but every week, thank you so much for all the hard work you put into these webinars: getting them out, sorting them out for us, getting the registrations done. Your work is really appreciated from that perspective. And then Jaco Markwat, the MD of Element8, thank you as well for joining us on the panel here today.

01:37
Lenny
Now, before I go in and talk a little bit about the Canary historian, I'm just going to hand over to Jaco to give us a very brief introduction to Element8.

01:52
Jaco
Cool. Thanks, Lenny. And absolutely, thank you, Clarise. We've had a phenomenal response to these weekly webinars, and the feedback has been really positive. So thanks to you both. And then, good afternoon and welcome to everyone on the call. Thank you for joining us. I recognize many of the names that I can see are currently with us. For those of you that we had not yet met, welcome to our community. We hope that you find this time with us this afternoon informative and valuable. So, Element8 is a new business and really was born from the need to provide best-in-class solutions to the South African community. And today, we are the proud authorized distributor for the Ignition SCADA platform, the Canary historian, and the Flow information platform. So Ignition needs no introduction; I hope it doesn't need any introduction.

02:46
Jaco
It is, of course, the unlimited platform for SCADA and so much more. Canary, that we're spending a little bit of time on today, is a flexible and best-in-class time series database, and of course the information platform Flow, which helps you make quality decisions more often. And really, what we believe is that we have a best-of-breed technology stack with these three offerings that offers a no-nonsense, non-bespoke and unlimited licensing model. Really, what we're looking to provide is cost-effective and flexible solutions without some of the complexity that we have seen in our industry and in our market. A big focus for us is, of course, our system integrator channel.

03:29
Jaco
The community has been very receptive of our offering and our engagement, but I think our system integrator channel is really our strength and where a lot of our existing relationships sit, with many of the people on the call. We are a channel-focused business. We want to make sure that we enable our system integrators to provide the best level of support and ultimately help you, our customers, to do the most with these solutions. Support on these products and solutions will, of course, be provided by us, for the first time in-country in South Africa. We're very excited about that. We have a team in place to help you with any queries or technical issues that you may have, as well as any specific architecture, sales engagement, or sales engineering questions. Then, on training:

04:20
Jaco
At the moment, we are very fortunate in that we are seeing some really good virtual online training platforms already rolled out. I think on the Ignition side, Inductive University is probably one of the most incredible online platforms for learning that we've come across. And it's been really encouraging to see so many of you already signing up on Inductive University and working through the certifications. So thank you for that. Likewise, we're building out the virtual training for Canary as well as Flow. Some of that was a little bit forced on us with Covid-19 and some of the restrictions that we have around movement and being able to host you in our office. But as soon as that is lifted, we will of course have training available at our office in Fourways. We look forward to seeing and hosting you there.

05:10
Jaco
And over time, as we understand how and when the lockdown will progress and open up a little bit more, we'll look at extending that to other areas as well. But that's really me. Lenny, thank you very much. I do want to say that we need your feedback. We would love your feedback. Again, the response to these weekly sessions has been very positive and very encouraging, but we're looking forward to you sharing some of the topics or content that you are interested in. We can, of course, cover quite a wide range of topics that are hopefully valuable to you. So give us a shout and let us know what kind of topics and content you would like to see on a weekly basis. We really enjoy this little bit of time that we have with you every week.

05:56
Jaco
We hope that you enjoy it, too, and let us know if there's something else that you would like us to cover. So thank you again for joining us. Thanks, Lenny. And thanks, Clarise.

06:07
Lenny
Perfect. Thanks, Jaco. All right, so let's get started here today. As we mentioned, this is the second part of our three-part series, and I'm going to talk a little bit about the Canary time series industrial database. I'm going to start with a few slides; we'll just go over the Canary system in the PowerPoint to explain the different components that make up the Canary system. And then what I'll do is carry on from last week, where I've got my data in my real-time SCADA system, and I'll show you guys how I can extend that data to actually get it written into the Canary historian and utilize its front-end and client tools to do a little bit of analysis on that data.

06:55
Lenny
At the end, I'll do a very quick pricing review just to show you guys how the Canary historian is priced, and there will be some time for Q&A at the end of the session as well. But please feel free, during any part of this webinar, if you've got a question, to submit it into the Q&A section of the webinar, and we can then cater for it at the end of the session. Right. So what does the Canary system give you at the end? Well, the Canary system gives you the power to actually transform your teams by using all of the process data that you might encounter on your industrial manufacturing sites. Now, this process data can be a lot, and it can seem overwhelming, but it doesn't have to be like that.

07:48
Lenny
Maybe the current solution that you have is making you feel like you're stuck, and you struggle to get a lot of this process data out to make decisions. But hopefully, what we can do from a Canary perspective is give the talented women and men working in your organization the tools to push them forward and enable them to create automated workflows on top of their process data, from a data management perspective. Now, there might be a whole bunch of reasons why you're currently being held back on that journey. It might feel that, well, my current solution is a database solution, and I need special database management skills to actually manage it. But there can be a little bit more to it than that.

08:32
Lenny
It can also be the fact that we're now seeing this massive convergence between the IT and the OT space. You might be responsible for the operational side and the production information that you've got, but now, all of a sudden, and we see this typically in our industry, we're becoming these jack-of-all-trades kind of scenarios, where now you have to also worry about security from an IT perspective. And you hear all of these three-letter acronyms like TLS and SSL and HTTPS, and now you have to cater for that as well. Hopefully, we can ease that from that kind of perspective with the capabilities of the Canary historian. Obviously, very important is to get user buy-in.

09:16
Lenny
You might have a very broad team, and you obviously need to cater for each and everyone's needs from this perspective. So that can also be a challenge. And I think, even more than in the past, ROI and affordability are coming into play as well. We're talking about data transformation, we're talking about Industry 4.0. We've heard all the horror stories about how difficult this digitization journey can be to navigate and to get implemented. But hopefully, you'll see with the Canary historian that we can help make that process a lot easier and a lot more affordable to actually get going. So we're here to help. We're here to guide you. The Canary historian might be a new name in the South African market, but it's been around for about 35 years.

10:02
Lenny
They've got around 18,000 site installations globally, and that caters to a range of industries, over 20 different industries worldwide. So it is a tried and tested historian that's been in the industrial market space for quite a long time. Who are some of the customers? You'll notice some big names on the slide here. I don't know if you guys saw the SpaceX launch last night that got suspended due to weather; I've just seen the NASA logo there. Hopefully, we'll get a successful launch on Saturday. That's the first manned launch from American soil in about ten years, which they're going to do with the new Falcon 9 rocket. That was quite interesting. Just a little bit of a sidetrack there.

10:49
Lenny
But I think what this slide shows us here is that it doesn't matter how large, how big, or how complex your client's requirement might be: everybody has got the same core values of what they want to get from and do with their data. So we can really cater for the biggest as well as the smallest from that perspective. Now, we hear this all the time. The first thing that we hear: we engineers, we buy a historian, and obviously we like to trial and test this thing. So we say, Mr. Client, what do you want to store? I don't know. How many tags have I got available from a licensing perspective? Okay, cool, let's historize everything. So the typical thing that we see is that we want to historize everything.

11:34
Lenny
We want to cater for the future, we want to cater for growth, and we don't want to be stuck in an "oh, I can only historize a small number of tags" scenario. The other thing that we're realizing is that with the whole birth of IoT and devices getting smarter and more sophisticated, data that in the past we only got every 15 minutes from a telemetry unit as a data point to historize, well, now it's not uncommon to say, I want to store my data every second. The devices in the field are getting smarter, and they are able to deliver these data points every second. So we get this kind of scenario where faster is better, and we need to be able to store these raw values faster and faster into our historians.

12:20
Lenny
And obviously, we need to be able to get that data out. So everybody in your team should be able to connect and get the data out that they require. And they must be able to do it themselves. They should not be held ransom by the vendor to get access to their own data that's been stored in the historian. And obviously, very important: you can't do much with the data without interpreting it and acting on it. So client tools to actually use on top of that data are very important. You must have the right tools to make quick decisions if you're going to make successful decisions based off all of this mass of data that is potentially locked inside of your manufacturing environment. Now, the Canary system has all the necessary software to address those points.

13:11
Lenny
We can definitely store large quantities of data at a very high resolution. We allow for secure access to this data, and it also has the client tools to help you drive and propel your self-service analytics needs. And I'm going to show you the process that we go through to actually set it up, and it's very simple. We've made it a very simple three-step process, and you'll see that as well when I do the demo. Now, the process is very simple. First of all, you need to be able to collect and store your data. Then you need to be able to assign context to your data, so that you can act faster on all of this data that you've got.

13:55
Lenny
And then obviously, you need to be able to maximize your operation by interpreting the data utilizing the client tools that we put on top of it. The first thing we need to do is collect the data. Now, Canary's got a few ways of collecting the data; they call them Canary collectors. And these collectors can be a range of things. They can get data from the MQTT Sparkplug B standard. It could be the normal OPC UA and DA communications that we are used to in our field. It could be other SCADA systems, databases, CSV files, and potentially you can even write your own little collector by using their web and .NET APIs.

14:35
Lenny
And if we look at a kind of typical architecture, you will obviously go and connect to your PLCs and devices. Now, you might already have an OPC server or an MQTT server installed on your PLC network to drive your SCADA system. The Canary collector we would go and install as close as possible to the source. So we would normally run that collector on the same server where your OPC server or MQTT broker is. But don't be alarmed: the Canary collector doesn't write back to the PLCs or anything. It's literally a collection mechanism, so there's no impact on your OPC server or on your PLCs from a networking perspective. So a very typical architecture would be as you see it on the screen there.
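
To make that collection pattern concrete, here is a minimal sketch in Python, using paho-mqtt, of an edge node publishing tag samples toward a broker that an MQTT collector could subscribe to. The broker address and topic are invented for illustration, and real Sparkplug B payloads are protobuf-encoded birth/data messages, not the plain JSON used here to keep the sketch readable.

```python
# A minimal sketch of an edge node publishing tag data over MQTT. Assumes a
# broker at "broker.example.com" (an invented host). Real Sparkplug B uses
# protobuf-encoded NBIRTH/DDATA messages; JSON stands in for them here.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"  # assumption: your MQTT broker host
TOPIC = "spBv1.0/SolarPlantA/DDATA/EdgeNode1/Inverters"  # Sparkplug-style topic

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

while True:
    payload = {
        "timestamp": int(time.time() * 1000),  # millisecond epoch timestamp
        "metrics": [
            {"name": "INV_01/Kilowatts", "value": 7421.5},
            {"name": "INV_01/Fault", "value": 0},
        ],
    }
    client.publish(TOPIC, json.dumps(payload))
    time.sleep(1)  # "faster is better": one sample per second
```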

15:31
Lenny
The second thing that we will then deploy is the physical, the actual Canary historian. And that's where we will store all of the tag information that we get from the Canary sender service, from the collector. It could just as well have been not an OPC server but an MQTT broker that you've got deployed; the principle stays the same, where the Canary sender service will run as close as possible to that broker, and then we'll have a physical host for the Canary historian itself. Now, one thing with the data is obviously the capability to have store-and-forward technology as well. So we've got our OPC server that will connect to our PLCs.

16:16
Lenny
We'll go and enable the Canary collector on top of that device, and the Canary collector will work in tandem with what we call the sender service. Now, the sender service will send all of the tag information out to the Canary historian side, where it's got a receiver service. And the receiver service will then push all the data into the Canary historian. Now, obviously, if we get a network break, we need to be able to cache the data locally. So that's exactly what will happen.

16:47
Lenny
The data will be stored locally on that specific box where you've got the collector installed, and the administrator of the Canary system will be notified via email that, hey, we've just gone into store-and-forward mode, so that he can go and work on the problem to try and rectify it. Now, when the connection comes back, we will automatically take that cached data that's sitting locally on that box and push it through, and it will automatically backfill the lost data. So we can cater for a break in network communication from that perspective. Now, the nice thing about the sender-and-receiver kind of architecture is that we can actually send data to multiple historians at the same time.
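
The store-and-forward behaviour described here can be sketched generically: buffer samples locally while the receiver is unreachable, notify the administrator once, and backfill in order when the link returns. The class and method names below are illustrative, not Canary's actual sender service API.

```python
# A rough sketch of the sender-side store-and-forward idea: buffer samples
# locally while the receiver is unreachable, then backfill them in timestamp
# order once the link returns. Names are illustrative, not Canary's actual API.
import collections


class SenderWithStoreAndForward:
    def __init__(self, send_sample, notify_admin):
        self.send_sample = send_sample    # pushes one sample to the receiver service
        self.notify_admin = notify_admin  # e.g. emails the Canary administrator
        self.cache = collections.deque()  # a local on-disk cache in the real system
        self.in_store_forward = False

    def log(self, tag, timestamp, value, quality):
        sample = (tag, timestamp, value, quality)
        try:
            self.flush()                  # backfill anything cached first
            self.send_sample(sample)
        except ConnectionError:
            self.cache.append(sample)     # keep the raw value; nothing is dropped
            if not self.in_store_forward:
                self.in_store_forward = True
                self.notify_admin("Sender went into store-and-forward mode")

    def flush(self):
        while self.cache:
            self.send_sample(self.cache[0])  # oldest first, history stays ordered
            self.cache.popleft()
        self.in_store_forward = False
```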

17:33
Lenny
I can send from one sender service to multiple receivers, to push into potentially the same Canary historian or a different one, and I can have the vice versa scenario. So we can really go and create a fully redundant mesh architecture to push the data to whatever sender/receiver pair we require. So that's quite great if you really need redundancy from a historian storage perspective. Now, the last part of this architecture, after collecting the data and pushing it through with store-and-forward, is obviously the actual historian itself. So how do we actually save the data inside the historian? Now, Canary is a NoSQL time series database. That means it doesn't use any SQL type of database to store the data. It is optimized for process data.

18:29
Lenny
So it really works well with the kind of manufacturing data that we're used to, that gets generated by a manufacturing environment. And the best thing of all is you don't really need special database administration skills or skill sets to maintain and keep this database up, because it uses a NoSQL kind of solution. So that's great: you don't need special skills to maintain your database. It is extremely scalable. We mentioned that we can cater from the most complex kind of scenario to the smallest one. So you can scale it from as little as 100 tags all the way up to 2 million tags of historized data, and we can even go and extend this a little bit further depending on your site and your enterprise.

19:18
Lenny
But we also have the capability to store data to a corporate-level historian as well. And we can either employ the dual logging that you just saw on the previous slide, from a sender service perspective, or we can actually mirror data that's already sitting in one historian to another historian. So let's just have a little bit of an architecture discussion around that. You might have a multi-site organization with a head office or corporate site, and people in that head office need to be able to see the data. Now, there are two ways that we can do it. We can go and mirror the data from the site historian, and we can say, okay, only mirror at midnight to save on bandwidth, et cetera.

20:01
Lenny
And the Canary historian at the corporate level will actually pull the data from the site historian. Or we can deploy a scenario where the data is getting pushed by utilizing the dual logging mechanism, so the site database can actually push the data into the Canary historian that sits at the corporate level. So those are two ways that we can get data into a multi-site kind of enterprise infrastructure environment. Now, what is the performance when we start pushing a lot of tags? Does the performance actually degrade as we get to that 2 million tag scale? Well, actually, it doesn't. It doesn't matter if you push 100 tags or 2 million tags: it always maintains 1.5 million writes per second.

20:52
Lenny
And you can get data out of the historian at 2.5 million reads per second. So that's great, and we don't see the performance degrade over time. I actually have a little bit of a video on that. So what I'm going to do is log into one of my senders. You'll notice that this sender service currently has 250,000 tags connected, and you'll notice on that sends-per-second kind of buffer there that it's sending around 255,000 tag values per second. I've got four more of them, and you'll notice that each one of these senders is pushing out 250,000 tags, changing roughly every second. So if we add all of those together, we get to my 1 million count. So there you go.

21:46
Lenny
You can see on the historian side, where it's receiving the data, that it's licensed for a million tags, and you'll notice the update rate there: it's pushing almost a million different tag values per second into the historian. Now, you'd say you probably need a massive computer or infrastructure to manage this. Well, actually, you don't. It's a normal Intel Xeon processor, a 3 GHz processor, a standard one that you can deploy in AWS or in a virtual environment. It's got 32 GB of RAM, so yes, it's got a bit of a beefy RAM setup, but it's only using about 16 GB of that. So not many resources required, if you think about what's happening: it's storing a million records per second. So that's actually great.

22:38
Lenny
And when we talk about storing a million tag values per second, what do we actually store? It's not just the tag itself; there's a whole bunch of properties that we store with the tag. Obviously, we store the timestamp of when the data actually changed. We then store the value of the tag itself. We also store what we call the quality, the quality that we get from the OPC server, which says how reliable this tag value in the historian is. And we store potentially any other metadata properties, like engineering units and high and low setpoints. All of that kind of metadata is stored with the tag in the historian. Now, the historian does have what we call a lossless compression algorithm. So what does that mean?
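
Roughly, the record stored per tag change looks like the sketch below: a timestamp, the raw value, an OPC-style quality, plus tag-level metadata properties. The field names are assumptions for illustration, not Canary's internal storage format.

```python
# Illustrative only: roughly what gets stored per change - timestamp, value,
# an OPC-style quality, plus tag-level metadata. Field names are assumptions
# for the sketch, not Canary's internal storage layout.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class TagSample:
    timestamp: datetime  # when the value actually changed
    value: float         # the raw value - stored as-is, never interpolated
    quality: int         # e.g. 192 = "good" in OPC DA quality codes


@dataclass
class Tag:
    name: str
    samples: list[TagSample] = field(default_factory=list)
    # metadata properties stored alongside the tag:
    properties: dict = field(default_factory=lambda: {
        "EngUnits": "kW",
        "HighSetpoint": 10000.0,
        "LowSetpoint": 0.0,
    })
```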

23:26
Lenny
It means that we can safely say that our data compression is best in class. We compress data to a third of what it would have used to store inside of a regular database. And very important: we do not do any interpolation of the data. Every raw data point that we get from the sender service is stored into the Canary historian. So just as you store the raw values, you can go and retrieve all of those raw values; no interpolation is done. And that's why we can really say that it is a lossless storing mechanism that we have in the Canary historian. So from that perspective, out of the box, it is ready. There's no data loss, because of the store-and-forward technology and the lossless compression algorithm that we utilize.

24:19
Lenny
And in essence, it's really a database that you can depend on, with very little DBA knowledge required to maintain it. So if you've got any questions on this point about collecting and storing, start pushing your questions into the Q&A section, and we will get to them when we're done with the presentation. So that's step one. Very simple: get the data into the historian, and depending on your architecture, you might deploy different ways and different collectors to actually get the data in. The next thing that we need to do is go and assign context to your data. Now, the Canary system's got a very unique way of doing this: they've got a concept called virtual views that we can go and deploy on the historian data.

25:09
Lenny
Now, I think this is something that everybody can relate to. This is the kind of standard of tag names that we're used to in our industry: a whole bunch of codes and prefixes and suffixes that mean something to someone. And there's normally a very good standard deployed with that, but these standards can differ from site to site. It could be that you've just bought a brand new OEM piece of equipment that's got a whole different type of naming convention. And in this case, when I look at these two data sets, it's exactly the same data; it's actually exactly the same tags. But someone who needs to consume this data doesn't really care about the engineering thought and processes that went into creating these very unique tag names.

26:00
Lenny
They want to actually read it out for what that tag is. They want to know it's the line boiler's flow or the line boiler's pressure. So no matter how the data is stored, we can create what we call a virtual view to go and transform it, to make it readable for anybody that wants to consume the data out of the historian. Now, these virtual views are created on top of the data that we store inside the historian. And very important to note here: we do not alter the actual tag name or the actual data in any way. The view just sits on top of the data.

26:42
Lenny
And when a client now browses to get the data out of the historian, it does so through the views service, and the view will then present the data in a much more friendly and readable manner. Now, you can have multiple views on the same data, again not transforming or breaking any of the data. One client can have a view of the data with view one, and potentially a different view of the same data by utilizing view number two. And that's what the power of the Canary system actually allows you to do: create these different views. And you can even restrict clients to only certain of these views, depending on the type of data that they want to go and collect. So that's the beauty of it.

27:25
Lenny
We can create as many of these virtual views as we want, and, very important, they do not reshape or alias or rename any of the tags or the data that's already stored inside of the historian. Right. So let's have a look at a little bit of an example. There are the raw tags that have been stored in the historian. We can push them through a view, and we can have a completely different client-facing view into that data, into the tag structure that's being presented to us. So I don't really have to know the engineering structure or the engineering tag name.

28:01
Lenny
I can literally read it out and figure out what the data is. That's great for new people that you're onboarding into your organization, especially if they only have to work on the process side of things that they already know; they know what data they're looking at and what they need to see. And obviously, you can have a completely different set of tags coming in from a separate line, but that same view will present that data to you in the same manner, so on the client side it doesn't really matter if you change from line one to line two. And I'll actually demo a little bit of an example of that as well, to go and create one of these views inside of my historian. All right, so that's creating views.

28:44
Lenny
One thing that we can also do with views, and it's not just great for structuring the tags that we get from different lines, is utilize views to create assets. This is a very important concept, and you'll see it when I actually build up my dashboards: I can go and create assets and asset-dependent dashboards. So the dashboard actually knows that I'm looking at a boiler, it knows I'm looking at a filler machine, it knows I'm looking at a main water drive. So finding issues across all of the multiple assets that you have can be very flexible and very quick if you've really employed a very good asset model. What we can do with the views is go and determine these different assets.

29:29
Lenny
We can determine how many of these different assets are potentially on the different lines. And it doesn't really matter if a line has only one boiler or two boilers or whatever the case is; the views will go and sort that out for you. So it will go and say, you know what, on line one, I only have one of this particular asset. On line three, I actually have two. And the view won't break or destroy any of your data if it doesn't find one of the different assets running on these different lines. But it makes it very simple for me to see data on my client side. So I can potentially just say, you know what, show me all the temperatures for boilers, or show me all the boilers where my temperature is above a certain number of degrees.
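
This is the payoff of the asset model in miniature: once tags hang off assets, a question like "show me all boilers above 90 degrees" is a simple filter rather than a hunt through raw tag names. The asset and tag names below are invented for illustration.

```python
# A sketch of why the asset model pays off: once tags are grouped into assets,
# "show me all boilers where temperature is above X" becomes a one-liner.
# The asset and tag names are invented for illustration.
assets = {
    "Line1.Boiler1": {"Temperature": 84.2, "Pressure": 3.1},
    "Line2.Boiler1": {"Temperature": 97.8, "Pressure": 2.9},
    "Line3.Boiler1": {"Temperature": 71.5, "Pressure": 3.0},
    "Line3.Boiler2": {"Temperature": 99.1, "Pressure": 3.4},  # lines may differ
}

hot = {name: tags["Temperature"]
       for name, tags in assets.items()
       if tags["Temperature"] > 90.0}
print(hot)  # {'Line2.Boiler1': 97.8, 'Line3.Boiler2': 99.1}
```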

30:14
Lenny
So it really makes analytics on top of the historian data extremely flexible. A little time spent creating these assets in your data model will save you a lot of time actually creating dashboards and linking up tags, and I'll show you that in the demo that I will do as well. Right. And again, a little bit of a reiteration there: it does not break your data, and we do not go and manipulate the tags. The nice thing about the modeled views as well is that if you add a new line and the view can figure out that you've added new tags to the system, it will automatically update with new assets. And that's really great from that perspective as well.

31:00
Lenny
Obviously, that's step number two, to assign this type of context to your data, and then you're ready to actually utilize the front-end tools, and that is all to optimize your operations, which we'll do with the client-facing tools that we've got. Now, Canary has three ways that we can get the data out. First of all, they've got the tool that they call Axiom. Now, Axiom is our trending tool. We can build dashboards and reports with it. It's built using HTML5 technology, and you can really do self-service reporting yourself. You don't need special skills to create these dashboards or trends. You can literally do it by yourself and create your own little dashboarding and reporting scenario. These are some screenshots, but I'll actually build it out in the example that I'm going to do here.

31:49
Lenny
There are a few Axiom screens that we can build, all in our browsers, and we can get all the historical data that's being historized inside of our historian solution. So that's great, and we can really utilize that to quickly make decisions on our manufacturing data. The second thing that the Canary historian has is an Excel add-in, so you can still get your data out of the historian by means of Excel. It's obviously there for very quick ad hoc reporting needs, to maybe monitor some stuff ad hoc and really quickly get data out. So it's a very simple little Excel add-in that you get for Canary, where you can browse your tags and get the data out. I've got a very simple video here just showing you that.

32:37
Lenny
So I'm just going to browse all the tags that are currently in my historian. I'm going to look for all my tank levels and hit the apply button there. And all I want to know is: what was the last value that's actually been stored in the historian for each of these tanks? And there you go. Very simple. You've got all the last values, so you have a kind of snapshot of all your tank levels at this particular point in time. So it's very quick and very easy to get your data out by using the Excel add-in. And the last way that we can get data out is through a whole bunch of APIs. They also have a standard OPC HDA server, so we can get data out utilizing the historical data access component of OPC.

33:21
Lenny
They also have an ODBC connector, where you can actually write queries in SQL. So at no point is the Canary historian at all closed for you to get data out of it. Okay. And that's the last part of the process: utilize the client tools to get the data out and obviously maximize your operations. Cool. All right, so it's time for me to do my little horse-and-pony-show live demo here. So we're going to do a live demo. This is actually a screenshot of Steve Jobs, I think in 2014, doing a new iPhone launch. He actually didn't have WiFi signal, and he also didn't have a 3G card in the device. Just to show that even the greatest of live demos sometimes do go wrong at some point or another.
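
As a hedged sketch of that ODBC route, the snippet below uses pyodbc against an assumed DSN. The DSN name, table name, and SQL dialect are illustrative guesses; check the Canary ODBC connector's documentation for the query syntax it actually supports.

```python
# A hedged sketch of the ODBC route mentioned above, using pyodbc. The DSN,
# table name, and SQL dialect are assumptions for illustration only - consult
# the Canary ODBC connector docs for the real query syntax.
import pyodbc

conn = pyodbc.connect("DSN=CanaryHistorian")  # assumed DSN configured on the box
cursor = conn.cursor()
cursor.execute(
    "SELECT TagName, TimeStamp, Value "
    "FROM Data "                     # hypothetical table exposed by the connector
    "WHERE TagName LIKE '%Kilowatts'"
)
for tag_name, ts, value in cursor.fetchall():
    print(tag_name, ts, value)
conn.close()
```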

34:19
Lenny
Right, let's just quickly recap. For those of you who were in my webinar last week, you will recognize or remember this picture. This was the architecture that I used last week to build out our edge device, sending data to an MQTT broker and then utilizing Ignition's Perspective module to create a real-time dashboard that I can access through my mobile phone. Just for those who weren't on that webinar: from my little solar plant that I've got here, I've got some inverters that give me kilowatts and fault codes, which I'm getting into my Ignition Edge device by means of a Modbus driver. And then that Ignition Edge device just pushes the data out by means of the MQTT Sparkplug B protocol.

35:10
Lenny
And then obviously, the data is consumed in my kind of enterprise solar solution, where I could potentially have a lot of these little solar plants pushing data into one big enterprise solution. So what I'm going to do this week is say, okay, now, how do we store all of this mass of data that we're collecting in a proper time series historian? So I'm going to go and add the Canary historian to this. And there are multiple ways that we can skin the cat, right? It all depends on your architecture, it depends on your requirements. But technically, we could have used an OPC DA collector to get the data in. We could have also used an MQTT collector to get the data in.

35:53
Lenny
We could also use a native module that the Canary guys are busy writing for the Ignition SCADA platform, to push data from that Ignition SCADA straight into Canary. And we could have also utilized, potentially, an OPC UA connection, if you have another PLC that's UA-enabled, to get the data in. So there are multiple ways to do this. As I said, please talk to us. Let's talk about your architecture, and let's see how we can help you determine the best way to get the data into our Canary historian. So what I'm going to do is log some data from MQTT. I'm also going to connect it to an OPC UA server. I'm going to also push it via this new Ignition module. I'm going to create a view, and I'm going to build a dashboard very quickly inside of my Canary historian.

36:45
Lenny
And if I've got time, I will also try and see if I can do an automated report. So those are the steps that I'm going to do right here in the demo. All right, I'm going to move over to my VM here.

37:02
Jaco
Sorry.

37:03
Lenny
I'm just going to stop this. There we go. All right, so you should be able to see my virtual machine. On this box, I've got my Canary historian installed as well. So I'm going to open up the little Canary administrator program there. It's going to connect to my historian, and I'm just going to make it a little bit bigger. So this is the administrative part that you will utilize to configure and maintain your historian. As we said, we don't need very special DBA skills to do this; I'll do everything inside this little admin utility. Now, you'll notice that I currently do have an MQTT collector connected, so I've got an active connection there, and it is currently subscribed to an MQTT broker that I've got in the cloud. But you'll notice it's not really updating.

37:58
Lenny
There are no new values pumping into the system currently. The reason for that is I haven't enabled my little edge device from last week to actually push the data out. So let's quickly do that again. I'm going to go to my Ignition gateway that I've got running on my little device. And just to recap what I'm doing with this: if we look at the configuration here, just quickly going to log in here, I've got this enabled to push data out by means of MQTT. So I've got the MQTT Transmission module installed here. And you'll notice that I'm connected to the same broker. So there's my broker there; it's exactly the same broker that I've got Canary pointed to.

38:40
Lenny
So technically, if I just reset the trial here on the Ignition side, you'll notice that I should start getting an increase in the values that I'm pushing into my historian. There we go. You'll notice that the TVQ count is moving, so I'm getting new data stored into my historian. If I go to the home section here, you'll notice that my receiver is constantly getting updated values; my sender is pushing that in. So there we go. Demo done. I've got MQTT data in my historian. Not that quick. You'll notice that I've only got 46 tags in here. So what I'm going to do is just quickly add some more tags for us to play with. I'm going to open up the designer, the Ignition Designer here. I'm going to connect to my project that's running on my Raspberry Pi quickly.

39:35
Lenny
And then we're very quickly going to go and add some additional tags in here. The purpose of this demonstration is to show you guys that by utilizing MQTT, there's no additional work that I need to do to get the tags into my historian, because they all use the same protocol, the Sparkplug B protocol, and it can automatically discover and add new tags as they become available or get pushed by my device. So let's do this. Currently there are 46 tags. What I'm going to do here is add a few more tags to this device. So I've got a simulator here, with some sine tags that are just pushing data. You'll notice there are already three of them pushing data. Let's increase that. Let's add a few more underneath here.

40:25
Lenny
And all I need to do is tell the system that I've added more tags. All right? So you'll notice on the transmission control, all I need to do is enable this refresh. Now, when I refresh, you'll notice that this subscribed tag count is going to jump. I'm going to hit the refresh here, and you'll see that we will increase our tag count, and these tags will be automatically created inside of the historian. There we go: I've got 53 tags. So if I go and look in the historian itself and open up this Element8 dataset where I'm actually storing the data for today, you'll notice that, there we go, there are all the new tags that I've just enabled on my edge device, and it's already starting to historize data. So that's great.

41:13
Lenny
So I don't have to create the tags physically in the historian; I can utilize it as is. Cool, right? So very quick, very simple: utilizing the MQTT device, I can very simply get that data in there as well. Right. Let's open up the client tool just to see how this data looks. So I'm going to open up the Axiom tool. There we go. This is the HTML5 tool where I can now add trends. You'll notice there are all the tags with their horrible tag names. If I look at the different inverters' kilowatts that I've got, I can go and add these to my trend tool, and you'll notice that it will start showing me the data that I've got.

42:01
Lenny
Obviously, I've only enabled it now, but I do have some historical data here, so I can show you guys how it looks. So there we go; there's some of my inverter data that arrived a little bit earlier. So it's a very simple kind of trending tool that we've got. But what I really like about this client trend tool is that I can actually do front-end calculations on top of the trend. I can very simply go to this little edit-trend button here, and you'll notice that I've got a calculator option. And I can very simply say, you know what, take my tags that I've got already here. So take the inverter tags, add these three tags together, and let's do an average.

42:48
Lenny
So I'm going to add them together, and I'm going to divide them by three to get a very rough average. And it will go and add that on top of my trend. So there, I've got my calculation tag at the bottom here. And very simply, what I can do from a visualization perspective is say, you know what, I want to know if my inverters are below my average. So I'm going to go and enable a low limit, I'm going to link that low limit to my calculation, and I'm just going to color the area there on top of that. So very quickly, what you can see here are the outliers, where inverter number three is below my average. So it's very simple to do that, very simple to get these front-end kind of calculations up and running.
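
The same front-end calculation can be reproduced offline, for example with pandas: sum the three inverter tags, divide by three for a rough average, and flag where one inverter falls below it, which is what the low-limit shading visualizes. The column names and data are invented for illustration.

```python
# The calculation done in Axiom's calculator, reproduced with pandas: a rough
# average of three inverters, then flag where one inverter falls below that
# average. Column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "inv1_kw": [7400, 7500, 7300, 7450],
    "inv2_kw": [7380, 7420, 7390, 7410],
    "inv3_kw": [7100, 6900, 7050, 6800],  # the suspect inverter
})

# add the three tags together and divide by three - a very rough average
df["avg_kw"] = (df["inv1_kw"] + df["inv2_kw"] + df["inv3_kw"]) / 3

# equivalent of the low-limit shading: True where inverter 3 is below average
df["inv3_below_avg"] = df["inv3_kw"] < df["avg_kw"]
print(df)
```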

43:34
Lenny
Let's just look at these tag names again. I really need to understand where this data is coming from; I need to understand the tag naming convention. It's not very simple for me to read that. So let's go and address that by creating a view on top of this data. I'm going to go back to the Canary admin. I'm going to add a view on top of the data that I'm already historizing. All right, so currently you get one standard view: that's the raw view into your tags as they are historized in the historian. But let's go and create a new one. I'm going to create a new view. I'm going to call it my solar plant, or my solar view, if I can spell this afternoon. Right, I'm going to link it to the source tags that are in the historian.

44:20
Lenny
And obviously, all my data sits in that Element8 dataset that I'm receiving and storing the data in. Right, hit the next button, hit the create button. And now I can go and create some rules. All right, so let's create a rule. The first thing that I want to do is just go and rename the tag names a bit. You'll notice that it's got quite a long tag name prefix, MyMQTT Group Edge; I mean, we really don't need to know that it's coming from an edge device. So we can go and remove all of that, up until there. And I know that this is all coming from solar plant number eight. I really am struggling. There you go. All right, so I'm going to replace that horrible prefix with Solar Plant A. Perfect.

45:17
Lenny
So let's apply that. Let's add another rule. Again, there's a little bit of duplication of the word inverter in my tag name, so I'm going to go and replace that as well. I'm going to replace Inverters.INV with just the word Inverter. All right, apply that. And then lastly, there's also an inverter with an underscore in there that I don't really need, and I'm just going to replace that with a blank space. And now my tag name looks quite nice: Solar Plant A dot Inverter 1, and then the fault. So that's perfect. That's a very nice, readable tag name format that we've got. And the last thing that I want to do is identify all the inverters in my plant.
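
Those three rename rules behave roughly like the regex substitutions sketched below. The raw tag name is a guess at the shape shown in the demo; the point is the mapping, and, as stressed above, the stored data itself is never renamed.

```python
# What the three view rules effectively do, sketched with Python regexes. The
# raw tag name is a guess at the shape shown in the demo; the mapping is the
# point - the data stored in the historian is never renamed.
import re

rules = [
    (r"^MyMQTTGroup\.Edge\.SolarPlant8", "Solar Plant A"),  # rule 1: drop prefix
    (r"Inverters\.INV", "Inverter"),                        # rule 2: de-duplicate
    (r"Inverter_", "Inverter "),                            # rule 3: underscore
]

def apply_view(raw_tag: str) -> str:
    for pattern, replacement in rules:
        raw_tag = re.sub(pattern, replacement, raw_tag)
    return raw_tag

print(apply_view("MyMQTTGroup.Edge.SolarPlant8.Inverters.INV_01.Kilowatts"))
# -> "Solar Plant A.Inverter 01.Kilowatts"
```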

46:12
Lenny
So I'm going to add an inverter asset to this view. In this case, it's very simple what I'm going to do: this time, I'm going to go and add an asset rule, and every time it finds an inverter, it's going to associate that with an asset of type inverter for me. Cool. Hit the okay button. And there we go. Let's apply this. Let's see what it's actually done at the bottom here. So first of all, it identified Solar Plant A. It identified all of my different inverter assets. And each and every inverter has a kilowatt and a fault tag that it identified. So that's great. It took that long, horrible physical tag name and created me a view that's readable, that I can actually read inside of my historian. Okay, now let's go and do one thing.

47:10
Lenny
I'm going to close this view. I'm just going to quickly rebuild it so that it is available immediately for me to use in my application. So I'm just going to rebuild that view. Right. So now let's go back to our front end, to our Axiom client. Let's create a new application here. So I'm going to go and create a new application, and this now allows me to create a little bit of dashboarding inside of the solution, by very simply creating an asset template link. So I'm going to drag this asset template onto the screen here. It already identified that I've got inverters as an asset type inside of my historian. So I'm going to apply that to the link, and you'll notice that it immediately identified the number of assets that I've got.

47:57
Lenny
What I can very simply do is take a doughnut gauge and drag it on top here. You'll notice that it will add it for each and every asset. I can link this to the kilowatts that I've got. So there we go. Maybe that range is a little bit out, so let's just set the scale high to 10,000, and it will apply to all of them. Let's add a label to just show me what these assets are. So there we go; there are the different labels. And lastly, let's add a sparkline chart onto this as well, and let's link the sparkline chart to the kilowatts so that I can see how my kilowatts are trending. Great. So there we go. Very simple. I don't have to link things ten times for the ten inverters that I've got.

48:45
Lenny
If I put this in live mode there, I've got all my inverters and all the kilowatts already built out, just by using the power of assets inside of my view technology. All right, so here's an outlier, here's a problem child: inverter number nine is quite low. And by simply doing a filter at the top here, I can say, give me all my kilowatts that are below 5,000. Whoops, I think I spelled that wrong. And there we go: inverter number six is giving a kilowatt reading that's below 5,000. So it's very simple. If you had multiple of these inverters, by just using the assets and being able to go and ask these types of questions on top of your asset model, you can very simply see all of the different inverters that are actually giving you problems. We can also go in and do more than that.

49:50
Lenny
Now that I've identified which inverter is the problem child, I might want to do a little bit of deeper-dive analytics on it. So let's extend this application here. I'm going to add a panel to this application. Again, I'm going to link my panel to my asset as well; I'm going to link it to my inverter asset. By default, it's going to show me my solar plant inverter. Let's apply that, and let's change it to be an inline selection, so there I can select all of my different assets. What I can now do is add a fully functioning trend graph onto this as well. Let's add a trend quickly. Let's link that to kilowatts, so it will start showing us the kilowatts there. And at the top here:

50:44
Lenny
Let's maybe just add an asset label again, to see which asset we selected, as well as a simple label or value of the actual kilowatts currently in the system. So I'm going to link that to my source tag of kilowatts as well. Perfect. Let's put this in live mode. Now, by selecting the different inverters, it will automatically update all the data for me in my panel, and I can actually go and utilize that to do my fault-finding. So spend a little bit of time creating these views and these assets, and it will save you a hell of a lot of time in building out these dashboards, and really give you great dashboards and content to search and look for the faults that you may encounter. Right, just to save this application: I'm very quickly going to save it.

51:37
Lenny
Let's save it into my public folder here, and let's just call it my solar application. Let's save that. And there you go, I've got a saved dashboard. And if all goes well, let's see if I can get my phone here on the side as well. I'm just going to move this a little bit out of the way. Let's get the phone in here. So there's my phone. The only apps you need during lockdown are Mr. Delivery and your GoToMeeting. But let's quickly go to my historian here. All I'm doing is browsing to my historian, specifying the port for Axiom, and there we go. I can see that same dashboard, and if I'm happy, I open up my applications here. Sorry, go to the public folder.

52:33
Lenny
There's my solar app, that same solar app, that I can open up, and it will render here on my phone. Right? So that's the great thing about using HTML5 technology: obviously, I can render these applications on my phone, and they are fully functioning there as well. And I can obviously select the different assets that I want. So it's great that it actually works on the phone as well. All right, so that's quite a mouthful from a demo perspective. Let's just get the phone out of the way, let's get back to the presentation here, and let's quickly finish off before we wrap it up. Okay, so I hope you guys saw that the Canary system gives you everything. It gives you the collectors, it gives you the store-and-forward capability.

53:28
Lenny
By default, it will include 100 tags as well when you purchase it, and you can create these virtual views. One thing I didn't get to, running a little bit out of time, is the calculations, but maybe we'll have a separate webinar session that will just handle that part of it. Now, you always have the question about where the data sits: can I have it in the cloud, and can I have my own private server? So definitely, Canary does have the capability: you can either host the data yourself, in your own private cloud or on a physical box, or they can actually spin up a cloud infrastructure for you and host it. And it's very simple to get a Canary system up and running.

54:14
Lenny
All you need to know is the number of tags that you need and the number of clients. All of this is available on their website. They've got a very simple pricing calculator there where you can select: do you want to use their cloud-hosted solution, either perpetual or at a subscription price, or do you want to actually build your own server and host your own data? And you can go and play with the calculator, and it will give you the pricing right there on their website. To give you guys an idea, for a very small solution, we're talking about the intro solution: 100 tags and one concurrent Axiom client, like I used to build up the dashboard. That's a perpetual or upfront cost of about $4,000, or you can do it on a subscription model at $135 per month.

55:00
Lenny
And by playing with that little calculator, you can build up a little linear scale for your different applications. In this case, if you bump that up to 1,000 tags, it works out to $5,350, or $180 from a subscription perspective. 5,000 tags with three concurrent users: $14,000, or roughly $500 a month. If you bump that up to 20,000, you can see we're getting to $50,000 there, or about $1,500 per month. And if you get to that kind of point of money, then they also have a no-licensing-limitation, or unlimited, solution, where you can go and pump as much data into your historian as you want, for a perpetual price of around $90,000, or about $3,000 per month. We also have enterprise agreements.
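
Tabulating the price points quoted here shows the rough linear scale Lenny mentions. These are the 2020 figures from this webinar only; check the calculator on Canary's website for current pricing.

```python
# The price points quoted in this (2020) webinar, tabulated to show the rough
# linear scale. Client counts marked None were not stated in the webinar.
# Always check Canary's online pricing calculator for current figures.
tiers = [
    # (tags, concurrent Axiom clients, perpetual USD, subscription USD/month)
    (100,         1,    4_000,    135),
    (1_000,       None, 5_350,    180),
    (5_000,       3,    14_000,   500),
    (20_000,      None, 50_000,   1_500),
    ("unlimited", None, 90_000,   3_000),
]

for tags, clients, perpetual, monthly in tiers:
    print(f"{tags!s:>10} tags: ${perpetual:,} perpetual or ${monthly:,}/month")
```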

55:52
Lenny
So if you do have multiple sites, if you do work for an organization with potentially multiple sites in a kind of typical enterprise solution, we can offer enterprise licensing as well. Getting started is very simple. Step one is to install it, play with it, and get a proof of concept going. We do offer 90-day trial licenses to get the proof of concept up and running. And obviously, the nice thing about that is that if you've done your proof of concept, the only thing you really need to do is license it, and then it becomes yours. It is a fraction of the cost of other historian solutions out there. And the last thing, which as you saw is quite simple to do, is to just deploy it and train a bit.

56:37
Lenny
And with the first year of purchase, you do get free support for a year, as well as help with setting up and configuring your Canary solution. All right, I know that was quite a mouthful. I think we're a little bit short on time; we've got about two or three minutes left for Q&A. So, Jaco, I don't know if there are any questions that have been logged so far.

57:03
Jaco
There was a direct question. Lenny, before we get there, you asked me to remind you about the poll that you wanted to run. I think we have a minute to do that. You should see that at the bottom of your screen as I kick that off.

57:18
Lenny
Sorry, I need to just get this. All right, so we do have a little bit of a poll before we end off here today, and it's about how you feel about hosting your own data, or letting somebody else host your data. Do you guys feel that cloud technology is at such a point that you're happy to host your historian data in the cloud? Or do you still feel, I really need my data close, I want to go and host this data on my own premises? And then obviously, there's a hybrid solution, where you can maybe have your own private cloud. So we just want to get an idea of what you guys' feeling is about utilizing cloud technology to host historical data. Thanks, Jaco. Cool.

58:07
Jaco
The only question that I can see that we received directly was from Graham: can these charts and dashboards be embedded in your SCADA, or in a SCADA?

58:18
Lenny
Yes. Good question, Graham. Sorry, I didn't have a lot of time, but yes, these being HTML5 pages, you can just use a normal browser control and point it to the URL. You can definitely create a page in your SCADA; most SCADA systems, and definitely the Ignition SCADA system, have the capability to embed a web browser control from a SCADA perspective. And you can really create a nice real-time dashboard from the real-time data that you're getting from your real-time sources, with some historical information from the Canary historian. And you can do a mix and match of those types of data on one SCADA screen, which will really give you valuable information, with the Ignition SCADA system.

59:06
Lenny
They also have quite a nice integration where you can actually get the historian data back and even utilize some of the Ignition charting capability to do that. So there is quite good integration between those two products as well. All right, so I'm going to end the poll. Let's see the results here. You should see the results there. Okay, so you guys are still a little bit scared about this cloud thing. It seems hybrid is kind of the biggest winner, then on-premise as a preferred method, and lastly to actually host it in the cloud. Cool. Thanks, guys, for that. So there are the results of the poll. Cool. Any comments on that, maybe, Jaco?

01:00:00
Jaco
Lenny, thank you very much for your time, and thank you to everybody else for your time. We are out of time, one minute over. If there are any other questions, and I think we just received one or two, we'll make sure to cover those offline. But let us know if there's anything else that we can help with on anything historian, or anything on Canary specifically. Thank you very much.

01:00:23
Lenny
Before we end off, just very quickly: next week is the last webinar in our three-part virtual learning series. What I'm going to do next week is take Flow, the information platform, and put it on top of my inverter data that I'm now historizing in my Canary historian, and I'm going to add some very cool context to the data. We're talking about a solar farm, so potentially I can add context of when the sun rises and when the sun sets, to determine how much energy I got through my actual sun day, if you kind of get my drift. So I'm going to add some very cool context to this data and create some sexy dashboards with Flow. So yeah, please look out for the email; it'll probably go out very early next week for the registrations on that.

01:01:11
Lenny
And please register and join me next week for the last one in the series. With that, thanks, guys. Same place, same time next week. Take care. Thanks. Bye.
