By Clarise Rautenbach
23 April 2020

Explore How You Can Maximize The Value Of Your Data With Canary


Watch the video now, or bookmark this page.

Transcript

00:04
Lenny
All right, I think let’s get the show on the road. Thank you, everybody, for joining us today. Thanks for making the time available. My name is Lenny from Element8. This is the second part of our webinar series during this lockdown period to educate our partners a little bit. And today what we’re going to focus on is the Canary system. Now, we are very privileged, and Jeff, thanks a lot for being on the call so early in the morning, but we’ve got Mr. Jeff Knepper on the line. He is the executive director of business development for Canary Labs, and Jeff is on a mission. Jeff is all about giving us cost-effective solutions to historize and store massive amounts of sensor data, with some kick-ass analytics on top of it. And that’s pretty much his role.


00:56
Lenny
One other thing I can say is Jeff is all about time series data. So thanks, Jeff, for joining us this afternoon. We also have Clarise Rautenbach on the call, our marketing manager at Element8. Clarise, thanks again for getting this webinar sorted and handling all the admin. And then we’ve got our MD extraordinaire, Jaco Markwat, on the call as well. Before we get cracking and I hand over to Jeff, I just want to give a quick introduction to the best-of-breed offering that we have at Element8. As we stand today, we at Element8 are the proud authorized distributor of the Ignition SCADA solution, the Canary historian and the Flow information platform: a best-of-breed technology stack that offers a no-nonsense, bespoke and unlimited licensing model; cost-effective and flexible solutions without complexity, backed by responsive, friendly and accountable technical support.


01:56
Lenny
Now, our journey with Canary is almost four years old already, as the Flow team has built a connector to integrate into Canary. With that, it offers one cohesive, productive environment where we allow our users to measure the contribution and impact of each team member and business unit to the overall outcome of your business, easily and through a non-technical experience, to give the power of information to the people, the true innovators in your business. We don’t have a mission statement; we are on a mission to breathe new life into our industry and provide the best of breed technology for all. Thank you very much. And with that, over to you, Mr. Jeff.


02:42
Jeff
Well, thank you, Lenny. I appreciate it. And despite some technical difficulties on my side this morning, we made it. So I really look forward to this time. And speaking of time, let’s talk a little bit about what to expect today. The agenda for this webinar is to start with some slides to go over the Canary system as a whole. Then we’ll move into a live demo to demonstrate not just the Canary solution, but the integration into other platforms. We’ll talk money and cover our pricing, and then Lenny and I will field some questions with some answers at the end as needed. So what is it that integrators and end users are really trying to do with data? Ever thought about it? Really, it’s just all about maximizing the value of the process. Otherwise, why would we go through all of this?


03:39
Jeff
So our goal is to provide easy-to-use process data. This doesn’t have to be complicated. We want to make it simple. We want to help automation experts actually automate their workflows. I can’t tell you how many times I’ve watched capable men and women manually poring through trend charts. There’s got to be a better way to use their time. Ultimately, our goal is to help you help the team unleash their inner rock stars. So what’s holding you back? What does it look like right now that’s causing problems? Well, number one, and this is a biggie, is database management. Just 5,000 tags over seven years could produce more than a trillion historical records. Who’s going to manage that? Coming right out of that question is the IT/OT balance. Operations has a job. IT has a job.


04:34
Jeff
Well, if you’re telling Operations they have to manage a database, they probably don’t have the skills for that. If you’re telling IT they have to manage the database, now we have to get IT and OT to agree even more. And then finally, it’s got to be affordable. Most of these solutions are so outrageously expensive, you can’t ever justify even trying them out. So Canary really wants to be a guide on this. We want to leverage our 35 years of experience and 18,000 installations, in pretty much every vertical you can think of around the world, to help you do just this: unlock the rock star inside of your team. And we’ve helped a lot of really large companies with very large and complicated systems, a couple of pipelines that I can’t put up here that have over 70 historians throughout their pipeline.


05:29
Jeff
But we’ve also helped a ton, thousands upon thousands, of small systems: 500-tag systems, 5,000-tag systems. So we know what you’re looking for. And basically it’s the idea of: I want to be able to store everything. I don’t want limitations around how much data I can put into my archive. And you know what? If I’m ever going to do machine learning, if I ever want to talk about AI, even if it’s five years down the road, I need to have really good data granularity. I need to poll faster. Additionally, I want open access to this database. Sure, I need to put in restrictions for security, but I want to make it easy for my team and collaborators to get into the data and to analyze it. And if they’re going to analyze it, they need some good software to do that.


06:21
Jeff
They need tools that will help them to interpret and to act. And that’s really exactly what we want to accomplish with our solution. We call it the Canary system. It’s a group of integrated software, almost all of it built by us over the last 35 years, that is designed to help you store large quantities of process data at speed, give secure data access, and most importantly, provide end users with self-serve tools. We want to get rid of the days where a client asks the SCADA team to build them a trend chart. We want your team or your end user’s team to be able to build their own dashboards, build their own displays, and to do it with drag-and-drop ease. So our system is best explained if we look at three steps to maximizing the data.


07:18
Jeff
And step one is how we collect and store all of the data that you produce. Inside of step one, I’m going to touch on three different pieces of software, and we’re going to start with how we actually collect data from devices and SCADA systems. The Canary collectors help you reach out and grab MQTT Sparkplug B, OPC UA, OPC DA, and data from SCADA systems. Additionally, we can read SQL databases historically or in real time, we can upload CSV files, and we have a web and .NET API. So all of these different collection sources are going to start bringing the data into the Canary system. The architecture could look like your typical architecture, where you have an OPC server that we could place our collection software on, then point that collection software to a data historian, to the Canary historian.
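As a small illustration of the CSV route Jeff mentions, here is a Python sketch that writes a file of tag/timestamp/value/quality rows for a single tag. The column names are assumptions chosen for illustration, not Canary's documented import format.

```python
import csv
from datetime import datetime, timedelta

# Hypothetical column layout; check Canary's CSV import docs for the exact headers it expects.
start = datetime(2020, 4, 23, 8, 0)
with open("boiler_temps.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["TagName", "Timestamp", "Value", "Quality"])
    for i in range(10):
        writer.writerow([
            "Line1.Boiler1.Temp",                       # made-up tag name
            (start + timedelta(minutes=i)).isoformat(),  # one sample per minute
            210.0 + i,
            "Good",
        ])
```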


08:17
Jeff
On an MQTT architecture, where we have a broker that’s more central, we would do the same thing. And on a typical SCADA architecture, let’s say an Ignition architecture where you could have both Ignition Edge and an Ignition server or Ignition gateway, you can take Canary and Flow software and put them on the same server right there in the middle. Additionally, we also use store and forward with our collection. So where we have our OPC server or MQTT broker, we would install our collector software, paired with what’s called the Canary Sender service. The Sender service points across the network to where the historian is installed, where we’ll see a Receiver service as well as the Canary historian. So the sender has a pretty simple job.


09:13
Jeff
Pick up the data as it comes from the collector and publish it out to the Receiver service so that the historian can archive it. But sometimes networks are unstable. When that happens, we want to make sure that we cache all of the data that we’re logging local to the Sender service, and we’ll actually write it to disk. Additionally, we notify your system admin, so that you know your network is down. Finally, when that network comes back up, all of the data that we’ve cached local to the sender automatically backfills and gets sent to the historian. Additionally, every Sender service can point to multiple Receiver services or multiple historians. This allows for redundancy with very simple setup, and you can push multiple Sender services into multiple historians as well.
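To make the store-and-forward idea concrete, here is a minimal Python sketch of the pattern Jeff describes: buffer to disk while the network is down, backfill when it returns. The class and method names are hypothetical; this is not Canary's actual Sender code.

```python
import json
import os
import time

class SenderSketch:
    """Illustrative store-and-forward buffer; not Canary's actual Sender service."""

    def __init__(self, publish, cache_path="sender_cache.jsonl"):
        self.publish = publish        # callable that delivers one record to the receiver
        self.cache_path = cache_path  # local disk cache used while the network is down

    def log(self, tag, value, quality="Good"):
        record = {"tag": tag, "ts": time.time(), "value": value, "quality": quality}
        try:
            self._backfill()          # send any cached records first, oldest to newest
            self.publish(record)
        except ConnectionError:       # network down: cache locally (and notify the admin)
            with open(self.cache_path, "a") as f:
                f.write(json.dumps(record) + "\n")

    def _backfill(self):
        if not os.path.exists(self.cache_path):
            return
        with open(self.cache_path) as f:
            cached = [json.loads(line) for line in f]
        for rec in cached:            # a real service would track acknowledgements per record
            self.publish(rec)
        os.remove(self.cache_path)
```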


10:11
Jeff
So you can place these collector services anywhere that you’d like in the field and point them to multiple historians. So when we talk about the Canary historian, what is it specifically we’re talking about? Well, the Canary historian is a NoSQL time-series database that was developed over 30 years ago specifically for industrial automation, and it’s been optimized for process data and for performance. Best of all, you don’t have to have any database administration skills to manage this solution, so Operations can securely and comfortably own the Canary install. The historian is highly scalable: it goes from 100 tags to over 2 million tags on a single server. And that means the same install that you would use for a small system can grow over time just with changing licensing, until you want to make it unlimited.


11:10
Jeff
Additionally, this is a level 2, level 3, full enterprise solution. I can use the same historian at a local site as I could use at my corporate instance, and we give you two different ways to move data from a local level 2 historian to a level 3 corporate historian. Imagine we have two local sites, and we have our corporate historian at the top. The first service we can use is our mirror service to pull data on a schedule from the local site. So we actually reach down into the historian and pull the data we want back up to the corporate historian, and we can schedule that interaction.


11:50
Jeff
Additionally, we can also dual log, so we can take the same MQTT broker collector or OPC collector or SCADA collector and point it not just to the local historian, but also to the corporate historian in real time. This allows us to get data as it is happening into both the local and the corporate databases. And the great news is our performance doesn’t change. Unlike SQL databases, the Canary historian has the exact same read and write performance no matter how large the historical record grows or how many tags are in it. And that looks like this: we can achieve one and a half million writes per second on a standard server architecture and two and a half million reads per second.


12:41
Jeff
Now, a lot of times when you start talking about those types of big numbers, people will start to question that and say, yeah, but what type of hardware are you using? So we’ve done testing. We always do testing on our product, and we wanted to create four different Sender services, each one producing a quarter of a million tags. So what you’re seeing are four machines spooling up, and we have a total of a million tags that are getting sent out of our Sender services. We’re pointing those million tags across the network to an AWS server so that we can demonstrate a real-time write of a million tags per second. So we’re going to look at our Receiver service, which is going to show a million tags and a couple, I guess that would be trillion updates, I’m sorry, billion updates.


13:35
Jeff
And you can see our updates per second. Store and forward is actively working because of network constraints. And so what type of hardware did we use? Well, a really basic AWS machine or basic server: three gigs of memory, and I believe this had six cores. Our goal is to keep CPU under 20% and memory between 50 and 60%. We run these tests, and we run them generally for an entire week, and we just find that a million updates per second is really not a problem for the historian. When we talk about tags, you’re already familiar with the tag, I’m sure: it’s the timestamp, the value, and the quality. But additionally, when we write tags to the system, we can also write over 100 custom metadata properties. So you can go far beyond just engineering units and alarm setpoints.
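A rough way to picture that record shape, a timestamp/value/quality sample plus a bag of custom properties per tag, is sketched below in Python. The field and property names are illustrative only, not Canary's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TagValue:
    """One historized sample: timestamp, value and quality."""
    timestamp: datetime
    value: float
    quality: str = "Good"

@dataclass
class Tag:
    """A tag plus arbitrary custom metadata (engineering units, in-service date, location...)."""
    name: str
    properties: dict = field(default_factory=dict)

boiler_temp = Tag(
    name="Line1.Boiler1.Temp",
    properties={"EngUnits": "degC", "InServiceDate": "2018-06-01", "Latitude": -26.2},
)
sample = TagValue(datetime.now(timezone.utc), 212.4)
print(boiler_temp.name, sample.value, sample.quality)
```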


14:28
Jeff
You could add anything custom, like in-service date or model information, perhaps longitude or latitude. And best of all, when we archive the data, we use lossless compression. There are very few historians that actually use lossless compression algorithms. Most of them are interpolating the data or using some type of swinging-door compression that causes you to lose the original raw values. Our compression gives us best-in-class performance; we actually compress the raw data values by a factor of three. So in all, when we talk about collecting and storing your data, this solution is out of the box, ready to go. It’s a five-minute install. You don’t have to worry about data loss on the logging side or on the archiving side, ultimately giving you a database that we feel you can depend on.


15:24
Jeff
I will mention, for Q&A, if you’d like to use the Zoom features and send questions to the chat, feel free to do so. That way we can log them and answer them at the end. Next we’ll talk about how we would assign context to all of this data you’re collecting. We don’t want you to have to drink through a firehose. This idea of trying to consume more data than what you actually need can be prohibitive. So we wanted to make sure we had a way to provide your client with just the data they need, in the context they need it. And that’s where our Virtual Views piece of software comes in. Let’s take a look at ten tags.


16:06
Jeff
These ten tags follow, obviously, some type of standard naming convention, and we’re writing them into one of our data sets inside of the historian. Maybe it’s a piece of OEM equipment in the factory. And then here are another ten tags, and guess what? They are describing the exact same ten points as the first set. But for whatever reason, maybe a naming convention change, or a different manufacturer of this piece of equipment, we’ve got ten completely different tag names. Now, your operations have been able to adjust to this over the years. The SCADA systems have been programmed. The operators know their tag names. But there’s a problem. The problem is your clients that are trying to access the data to do reporting, to do analytics. They really don’t want to have to understand the cheat sheet for tag names. Wouldn’t it be incredible?


17:05
Jeff
Imagine if you could provide your client a unified tag naming convention without ever having to reprogram a PLC or change a SCADA system. That’s exactly what we’re hoping to do. Using our Virtual Views, we want to be able to take the tags in data set one and the tags in data set two and give you the ability to let the client see them in a standardized format. Our virtual views don’t ever actually affect the historical record, because our Views service sits on top of the historian. Here’s what it looks like. The Views service is the go-between, not just for the historian and the client, but also for other internal pieces of the Canary solution. That means that I can use the Views service to influence how the client sees the data inside of the historian, again, without ever actually changing the data.


18:04
Jeff
We reshape and alias the historized tag formatting using the views, and you can create multiple views of the same data. So you can have two different views or three different views, and you get to set the permissions, based on the client’s login credentials, for which view they see and which views they have access to. One client can have view one, another can have view two, or I can have multiple clients where some clients have access to only one view, while other clients have access to all of these virtual views. Let’s take a real-world example of this approach. We have our tags. We apply a virtual view to these tags that are in the historian, and those tags remain in the historian as they were written. But my client now sees the tags, when they browse for tags, in this format.


18:58
Jeff
Same thing if I look at that second set of tags: there’s the format the client sees. So back to the first set of tags. Notice the client-facing view. The only thing that changes when we move to the second set is the uniqueness, things like the line number, the boiler number. But our tag naming convention stays absolutely identical between the two. So how do we do this? Well, we use the power of regular expressions. You can write regex rules that are going to help you find and replace tag structure en masse. But the cool thing about regex is we can keep the uniqueness of the tags while we’re doing it.
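As a rough illustration of that find-and-replace-with-capture-groups idea, here is a small Python sketch using made-up tag names from two conventions. Canary's view editor has its own rule syntax, so treat this purely as a demonstration of the regex technique, not its configuration format.

```python
import re

# Two historized naming conventions describing the same kind of point (names are made up).
tags = [
    "Plant.Line_1.BL1_TT",         # convention A: "BL" boiler, "TT" temperature transmitter
    "Device1_Line2-Boiler2-TT01",  # convention B: same point, different vendor naming
]

# Rule for convention A: keep the line and boiler numbers (capture groups), rename the measurement.
rule_a = (r"^Plant\.Line_(\d+)\.BL(\d+)_TT$", r"Line\1.Boiler\2.Temp")
# Rule for convention B: different pattern, same client-facing result.
rule_b = (r"^Device\d+_Line(\d+)-Boiler(\d+)-TT\d+$", r"Line\1.Boiler\2.Temp")

for tag in tags:
    for pattern, replacement in (rule_a, rule_b):
        if re.match(pattern, tag):
            # The unique parts (line and boiler numbers) survive; the structure is standardized.
            print(tag, "->", re.sub(pattern, replacement, tag))
```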


19:38
Jeff
And additionally, if your tag naming doesn’t quite have the logic that you would need to be able to write rules, we can also read tables from SQL and CSV files and reference those to help build these virtual views. And I’m going to show you this in real time during the demo. Once we’ve built out these virtual views, the next logical move is to group our tags into assets. So if we take an example of a site with a couple of different lines on it and some standardized equipment on each line, what might that look like in our model? Here we have our ten tags that describe boilers and fillers and a water main, and we can group those tags by the equipment that they describe. So my first three tags are boiler tags.


20:33
Jeff
And notice that I’ve taken the TT and just called it Temp. I’ve taken a pressure and just called it Pressure. I don’t have to keep the complicated tag names, because again, this is just a virtual model. And if I have multiple different configurations, maybe one line has a boiler but line three has two boilers, the system adapts accordingly. So when I look at my boiler tags for line one, line two and then line three, I would see them in this format. So why go through all of this? Well, there’s a huge benefit, because now my clients can actually request data not just from a tag list, but instead just call tags based on an asset. For instance, a client could say: show me all boiler temperatures.


21:24
Jeff
Or better yet, what they really are looking for is to find boiler temperatures that are outside of spec. So now they can filter based on tag or asset condition: show me all the boilers with a temperature over 220. Our virtual models do not break if the assets don’t match perfectly. In fact, if you have one asset that is missing a few tags, it still shows up; it just doesn’t include the tags that are missing. And this is the most incredible portion of this entire virtual view: when new tags show up inside of a data set, the virtual view automatically rebuilds itself, discovering those tags, forming them into assets, and then publishing them to the client automatically. No human interaction is needed. All right, let’s talk about the second piece of data context, our calculation engine.
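Conceptually, that kind of asset-condition query boils down to a filter over the model's current values, something like the toy Python below. The asset names and values are made up; the real query runs inside the Canary Views service.

```python
# Hypothetical current values keyed by asset path.
boilers = {
    "Line1.Boiler1": {"Temp": 205.0, "Pressure": 6.1},
    "Line2.Boiler1": {"Temp": 231.5, "Pressure": 5.8},
    "Line3.Boiler1": {"Temp": 224.9},  # missing Pressure: the asset still shows up
}

# "Show me all the boilers with a temperature over 220."
out_of_spec = {name: vals for name, vals in boilers.items() if vals.get("Temp", 0) > 220}
print(out_of_spec)
```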


22:20
Jeff
A wise man named Lenny once demonstrated to me that our car’s dashboard is actually a lot like our automation systems: it’s providing us raw data, right? Well, additionally, the car’s dashboard also gives us condition-based data: if the check engine light comes on, we know that some rules have been violated. And it provides us estimates or calculations about what we can expect in the future, some calculated KPIs, like the range left in our tank (which I should have changed to kilometers, I suppose). Well, we want to do the exact same thing with the database. And so we’ve built a calculation engine. You can build out a single calculation, a metric, maybe from a temperature tag and a pressure tag, and then apply it to all of your assets. This works in real time, but it also has the ability to backfill.


23:20
Jeff
And we give you over 70 different functions to build these calculations. So, for instance, we could transform our raw temperature data and say: show me a running 60-minute average of my temperature. Take the filler tags that are bottle count and bottles rejected and create a new tag, applied to all fillers, that shows me the percentage of rejected bottles. Or take my water main flow and roll it up into a daily accumulation or a daily volume and tell me what that is per line. And now take the lines and roll them up into a site number. There’s a lot of power behind the calculation engine, and I hope to show that to you as well in our demo. Finally is our event monitoring piece.
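The calculations Jeff lists here, running averages, ratios and daily roll-ups, are the same transformations you could express by hand. The pandas sketch below, over made-up one-minute samples, is shown purely to illustrate the math, not Canary's own calculation function syntax.

```python
import pandas as pd

# Made-up one-minute samples for a single line.
idx = pd.date_range("2020-04-23 08:00", periods=180, freq="1min")
df = pd.DataFrame({
    "Temp": [210 + (i % 7) for i in range(180)],  # boiler temperature
    "Flow": 12.5,                                  # water-main flow
    "BottleCount": 100,
    "BottlesRejected": 3,
}, index=idx)

df["Temp60minAvg"] = df["Temp"].rolling("60min").mean()              # running 60-minute average
df["PctRejected"] = 100 * df["BottlesRejected"] / df["BottleCount"]  # new calculated tag per filler
daily_flow_volume = df["Flow"].resample("1D").sum()                  # daily accumulation per line
print(df[["Temp", "Temp60minAvg", "PctRejected"]].tail())
print(daily_flow_volume)
```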


24:07
Jeff
And with event monitoring, we have the ability to take these assets and write rules that will notify you when an asset is outside of your specification. We can even report on this and provide analytics around the duration of the event. Here’s exactly what this might look like if you had a tank level percentage tag and a temperature tag: if my tank’s level falls below 25% and the temperature is over 80, let’s start an event. We run the event rule for a week, and here’s the report that we receive. All of our assets are listed in column A, we have our start and stop times, we have the duration of the events, and then we have customized analytics that you as the admin get to choose.
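To show what such a rule amounts to, here is a small pandas sketch over made-up samples that finds contiguous periods where level < 25% and temperature > 80, then reports a few duration-style analytics. In the real product the rule is configured in Canary's event monitoring, not written as code.

```python
import pandas as pd

# Made-up one-minute samples for one tank.
idx = pd.date_range("2020-04-23 08:00", periods=60, freq="1min")
level = pd.Series([40] * 20 + [20] * 15 + [45] * 25, index=idx)  # level, percent
temp = pd.Series([75] * 20 + [85] * 15 + [78] * 25, index=idx)   # temperature

in_event = (level < 25) & (temp > 80)              # the rule from the slide
runs = (in_event != in_event.shift()).cumsum()     # label contiguous runs of the condition
data = pd.concat([level, temp], axis=1, keys=["level", "temp"])
for _, run in data.groupby(runs):
    if in_event.loc[run.index[0]]:                 # only report runs where the rule was true
        print("event", run.index[0], "->", run.index[-1],
              "| min level:", run["level"].min(),
              "| avg temp:", round(run["temp"].mean(), 1),
              "| outgoing temp:", run["temp"].iloc[-1])
```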


24:57
Jeff
So during these events, I’d like to know: what was the minimum level of the tank, what was my average temperature, and when the event ended, what was the outgoing temperature on that tank? All right, last piece: maximizing the operation. This is where we talk about how we provide clients the ability to see the data as well as connect to other parts of your system. Our first piece of software, which I’m excited to show you now and then demo later, is Axiom. Axiom is our trending, dashboarding and reporting tool. It’s all built inside of HTML, and this is the tool that’s going to give your users the self-service option on their reporting. Axiom dashboards can be absolutely beautiful. You can put as many trend charts or trends on as you’d like, and include tables with live tag values and historic tag values.


25:52
Jeff
You can place symbols, graphics, gauges, spark charts; anything you’d like to visualize, you can do it in Axiom. The HTML portion guarantees you that it’s going to work on smartphones and tablets. And the editor, as I’ll show you later, is built right into Axiom, allowing you to just drag and drop these screens as you go. Next is our Excel add-in. We want you to be able to connect the tool that everybody already knows how to use to your historian, and it really does handle large amounts of data nicely. A quick video to show that: I’m going to fill into column A every tag in my system that’s a level percentage, apply that, and then I want to know the last known value of those tags. Three clicks and I’m done.


26:41
Jeff
Just like that, I’ve built out an immediate report of the last known value for every level percentage tag that I have. And then finally, all of this data has to get out to other systems. And that’s where the Canary connectors come in. We have the ability to publish data using our Publish service. We provide a free web API that you can connect to. We also present our historian as an OPC HDA server and have built-in SCADA connectors, which I’ll demonstrate later, as well as give you the ability through ODBC to make SQL-like queries against a NoSQL database. And I’m really excited to announce that coming next month will be a sixth item on this list: we will be publishing via MQTT Sparkplug as well.
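For a feel of what the ODBC route looks like from Python, here is a hedged sketch using pyodbc. The DSN, table and column names below are placeholders chosen for illustration; the actual SQL-like dialect and data source setup are defined by Canary's ODBC connector documentation.

```python
import pyodbc  # assumes an ODBC driver/DSN for the historian is installed; names are placeholders

conn = pyodbc.connect("DSN=CanaryHistorian")  # hypothetical DSN name
cursor = conn.cursor()
cursor.execute(
    "SELECT TagName, Timestamp, Value "
    "FROM History "                                   # hypothetical table name
    "WHERE TagName LIKE 'Line1.Boiler1.%' AND Timestamp >= ?",
    "2020-04-23 00:00:00",
)
for row in cursor.fetchmany(10):
    print(row.TagName, row.Timestamp, row.Value)
conn.close()
```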


27:33
Jeff
So if you’re serious about an MQTT and Sparkplug architecture for your organization, we want to make sure that we can act as a publishing client back to your MQTT broker. And it makes it so easy to get data into other platforms like Flow and Ignition with this feature. All right, so now we’re moving over to the demo side, and here are the goals. I want to show you how we would pull data from a SCADA system; I’m going to use Ignition. We’re going to create an asset model from those tags. I’m going to build a calculated tag and send it back into the asset model. We’re going to design an Axiom dashboard. We’re going to build an automated report, and we’re going to integrate all of this right back into Ignition. And I have 20 minutes to do it. So let’s get cracking.


28:25
Jeff
It’s a live demo. What could possibly go wrong? Hopefully nothing. All right, let me end the slide show and we’ll get started. Okay, so the first thing I want to do is from the admin’s role, which is where I’m operating right now. So I’ve got my admin hat on, I’m on my Canary historian, and I’m going to open our admin panel. Here it is. And I’m going to come into our historian and build a new data set to send my Ignition tags to. I’ll drop into configuration. I’ve already got one data set called Demo One, and we’ll just call this one “SA rugby rules”. Sorry if there are any Australians on the channel. I didn’t even spell rugby right. That’s okay, we’ll remove it, that easy, see that? And we’ll do it again: SA rugby rules. That just proves that I’m an American.


29:25
Jeff
We don’t even get a chance to watch it. So there it is. Now let’s take this data set and start sending data to it. I’m going to hit my Ignition gateway and configure the gateway. I’ve already installed my Canary module here, so all I have to do is build a new logging instance. We’ll call it Demo, tell it where the Sender service is currently located (I have security turned off because I’m on my own laptop, localhost), and then the data set, SA rugby rules. All right, create this. And now Demo will show up in my Ignition project as a storage provider, which means I can open up my designer. I’m running Ignition version eight, I think point seven, and I’m still operating out of Vision on my side. So let’s open up my Vision project here. I have the MQTT Distributor and Engine modules installed.


30:55
Jeff
And if I look at my edge nodes, there’s my data that I’ve generated and there are all my tags. I’ve also set up my default historical tag group as needed, and I’m going to be applying that when I historize these tags, so we can use all of Ignition’s built-in functionality around the historian. Ignition stays pretty agnostic about which database you send tags to; it doesn’t have to be SQL, you just need an easy way to publish it to Canary. So we’ll turn history on. We’ll say I don’t need to use the SQL historian; instead, I’m going to use the Demo configuration I just created, and I’m going to use the tag group Default Historical, and hit OK. History comes on. Oh, you know what I didn’t do? My apologies. When I installed my module, I didn’t restart it.


31:57
Jeff
So there’s my live demo “what could go wrong” piece. Let’s get the module restarted and activated. And there’s my sender session connected, and you can see the tags are now moving over into the historian. So there’s my data set, there are my tags being written, and if I pop into it and look at the file, there’s each of the tags coming in with the full tag path from Ignition being respected. All right, so the next thing that I’d like to do is take all of these tags. And if you’ll notice real quick, if I look at the tag name on line one, it’s kind of following those ten tags I showed you on the PowerPoint. Then over here on line two, I’ve got a completely different naming convention. The rest of the tags follow one of those two naming conventions.


32:59
Jeff
So what I want to do is create a virtual view that will standardize that naming convention. Now, I have to warn you, if you don’t know regular expressions, there’s going to be a whole lot of confusion on this part. But it’s important that you see how we do it from the admin perspective. So we’ll just call it Demo View. I’m going to build it on top of the data set that we’re logging the data to. And this is now where I’m going to work out my virtual model. So I have 56 tags that are coming in from the data set, and the tag names look like this on the right-hand side. As I add regular expression rules to modify and reshape those tags, I will see them change here on the left.


33:50
Jeff
Right now it’s pretty much a one-to-one match, other than it’s dropped the data set off the front of the path. I’m going to start by cleaning up the front end of this tag name. I don’t want anything before the line designation. The word “line” is universal, but the number of the line is unique, so I’m going to preserve that. So the first thing I’ll do is capture everything on the front half; the parentheses basically create capture groups. Then I’m going to eliminate the word “line” while holding on to the uniqueness of the line number, and then I’ll just grab everything else. Okay, so we’re going to start it off with “Line”; capture group number two had the unique number of the line.


34:38
Jeff
As I’m typing this in, you can see down below, it is showing me in real time how it is going to change. And then the uniqueness is part number three. Let me go ahead and drop that underscore out of there also; I can do that by just dropping it outside that capture group. Okay, rule number one, applied 56 times. When I look at the changes, this is now what the client’s going to see when they’re browsing for tags. So the front part, the line part, is now standardized. Now I’m going to work to standardize the second section, which is what tells me what my equipment is: boiler, filler, water main. I’m only going to do a demo here on the boilers, so I’ll actually drop everything else out. If it says BL, it’s a boiler. And right here are boilers.


35:27
Jeff
So let’s just go ahead and make a quick rule that says BL is now “boiler”. It’s been applied, good. And now I want my boiler to be capitalized: if it says boiler, please make it with a capital. All right. And now a tricky piece of regex: if the word Boiler with a capital B is not in the tag, I want to eliminate it from my model. Again, I’m not doing a darn thing to it in the historical record; I’m only eliminating it in the virtual model. So we’ll exclude it. That applied 35 times, and now what I’m left with are only my boiler tags in this model. Let me go ahead and extend this up just a bit. I’ve got two different naming conventions I now need to get standardized.


36:29
Jeff
And my first one is going to be these with the “0020”. The “0020” offers no value here, so I’m going to drop it out. I will capture the “Boiler” and the unique part of the boiler, and then I’m going to keep the FT, PT or TT; we’ll capture those two letters and then ignore the rest. All right, so “Line one” is already there. So if I just bring in the boiler information and then my unique tag name, we’re done. I just need to do this one more time with these tags. So we’ll do the exact same steps, but this time I’m excluding “Device1” and the underscore, I’m capturing the Boiler and the uniqueness, I will escape out and leave the dash, take the next two letters, and then forget the rest. All right, and that’s two.


37:40
Jeff
Okay, the last three rules I need to write: I want to get the FTs to read differently. In this case, my FT represents pressure, my TT represents temp, and what’s left, my PT, which I think in my head I had as steam out. And there we are: we’ve standardized two completely different sets of names to look exactly the same. Now I’m going to create two assets: a line asset and a boiler asset. I’ll do that by basically telling it to look for the dot. Anything on the front, the first part up to the dot, tells me it’s a line. Now it groups them by lines. And then the same thing, only this time I’m going to go the whole way to the second section.


38:53
Jeff
I am curious, if we wanted to do a bit of a poll, to find out how many of you are comfortable with regular expressions. If you don’t mind, in the chat, if you know regex, give yourself a pat on the back and tell the world, let us know. Or better yet, if someone in your organization knows regex. I did not know it when we deployed this solution, but it only took me about two hours to learn it, and a little bit of practical usage. See what’s happened: our lines now represent boilers, and if there’s one boiler inside of a line, it shows one, and if there are multiple, it shows multiple. All right, we create it and we’re closed. Checking the time, okay, we’re on track.


39:35
Jeff
When my client comes in, they will now have an option to browse data via the historical view, which starts off with data sets and then shows me the tag path, and I have to get the whole way through all of this to get to those complicated tag names. Or, when my client browses, and you’ll see this inside of the Axiom demo, they could browse the demo asset model, and the first thing they see is: pick the line you want to look at, now pick the boiler, and there are your tags. That’s a lot easier. All right, last piece: let’s build a calculation. I’m going to go very quickly here just to demo this. Temp: let’s do a 60-minute average. Actually, that’s going to be a bad idea, because I haven’t been logging the data for that long. Let’s do a five-minute average instead.


40:23
Jeff
Pick the model, tell it which asset it should be looking at, the boiler, and how often to run this; let’s run it every minute. Let’s go ahead and backfill this until about 8:00 a.m. my time, and off we go. Now all I have to do is pull in my function for time average (this is where all of the different functions live), and then drop my tag and my time in. We’re doing temp, and we said let’s do it every five minutes. Write this into SA rugby rules, and keep the asset path, just shortened a little bit. This allows us to write these tags one time, write the rule one time, and apply them to all the assets. We’ll just call it “temp five min average”. I can evaluate it against my different lines, and I can see that my calculation is working.


41:23
Jeff
I apply it, I close it and run it, and now all those calculations are being run, and they will show right back up inside of my asset model within two minutes. Okay, I’m done with my admin work. I’m now ready to move off of admin and move to the client side of the presentation. So here I am as the client, and I want to see all of this data. So I’m going to pull up and log into my Axiom client. And Axiom, again, is our HTML-based web browser application. So if you’re in a web browser and you have access to the server, you’ll come in, type in the address and log in with your credentials. My credentials have been remembered already, so it didn’t prompt me, and I’m going to build a brand new application.


42:18
Jeff
So what you’re looking at is our design window, which is currently open. This is the dashboard that I can place my controls or my widgets on; I’ll make it a little bit larger. And over here on the left is the design portion. I can toggle that on and off by clicking the edit tool. And the only thing that I really want to do on this design is build a report that shows me my boilers. I’ve built them in an asset model, so I’ll use my asset template and associate the template with a boiler. What this does is auto-discover the number of boilers in my model. It gives a card for each boiler, and whatever I do on one card automatically gets designed onto all the others. So if I make this card larger, they all get larger.


43:15
Jeff
If I pull a trend graph onto this card, all the other cards get a trend graph too. Let’s do it at about that size, and we’ll go ahead and make our card a bit smaller. I’ve got a trend graph here; on the trend graph, I’m going to go ahead and place all the tags from my boiler. And then to the left, I want to put a little grid. My grid is just like a table. I’m going to run two columns on it, and we’ll have all three tags’ live values. I don’t have to use just live values; I could do things like, here, we’ll do temp first, I could do historical values: what was the value for all of last week on average, perhaps. But for this demo, I don’t have a lot of data going backwards, so I’m just going to show live values.


44:21
Jeff
All right. And last but not least, our pressure tag. I’m just doing a control-copy because it allows me to move a little faster. And I think the last piece... oh, look, my temperature five-minute average showed up. So let’s put a doughnut gauge here at the bottom of the grid. I’m not going to fool with putting a label on it, but let’s take this doughnut gauge and change the scale. We don’t want that to ever be higher than 220, so we’re going to set a scale of 200 to 240, we’re going to set a high limit at 220, and if it gets there, we’re going to go red. Now let’s assign a tag to it. There it is. Okay, I’m done with my editing. I’m going to go ahead and save this. I’ll save it publicly and call it Boiler Profile.


45:27
Jeff
You can see I’ve built this before; I’m saving over it. And what I’ve done is designed it one time. Let’s make this the last 15 minutes and put it in live mode. There we go, save my changes. I’ve designed this one time, and every boiler that’s in my factory is now being shown. How awesome is that? This works the same at scale; there’s a great video on our YouTube channel, or in our blog, where we did this with 8,000 assets. It doesn’t bog down based on size. Now, if you wanted to find only the boilers whose temp is outside of spec, greater than 220, I just put a filter on it, and it immediately finds that Line 3, Boiler 1 is outside of that spec. Notice that all my other boilers have disappeared.


46:25
Jeff
Let’s go ahead and set this to automatically refresh with that rule every one minute. And you know what, we don’t need that rule up there, so let’s hide it out of sight, and we’ll save it again. Because now that I’ve done this, I can take this URL, copy it, come back into my Ignition project and open up my designer. Let’s just drop a web browser component in here, and if we close our tag list, I can drop that URL right here. And now my Axiom report is going to show up inside of Ignition. And there it is. Also, I promised to show you how we’d automate this report. If I want this report to be in my team’s inbox every morning, I just use the automated report feature: I come to the scheduler and I build it out. Show me the boiler profile.


47:39
Jeff
Send it every morning, Monday through Friday, and 8:00 a.m. is when I want it delivered. Show the report. Here’s where I put the email for my email group, and I could even tell it to include an image of the report as an attachment, so if someone doesn’t have access to the historian server, they’re still able to see the report. And that’s all I would do; now it’s going to run. So that brings me to the end of our live demo. We’ve shown the integration. Additionally, I should mention we can also take this configuration and turn around and provide Canary as a history provider to Ignition.


48:26
Jeff
So if I wanted to use an Ignition easy chart or a sparkline or whatever, I can actually browse the Canary historian and the views that I’ve built, these asset models, and I could provide those same tags to an Ignition instance. So we can not only pull tags out of Ignition, but we can provide them right back to Ignition controls, or embed our clients inside of it. All right, I hope that you were able to follow everything. I know I went pretty fast, but we built an entire project, and we did it inside of our 25-minute window. I’m really pleased that we were able to show you that. So with that, thanks. Over to my buddy Lenny to talk about the business model.


49:24
Lenny
Cool. Just before we do that, Jeff, we’re quickly going to run that poll you suggested. So you guys will notice you’ve got a poll there, just to get a little bit of feedback on how many of you actually know regular expressions, or anybody in your organization. So you’ll see that poll pop up right now, and you can have a vote, and we’ll share the results a bit later. All right, so the Canary business model: who will host the data? There are two options: it comes in a perpetual as well as a subscription model. So you can host the data yourself on a server that you install, or you can get Canary to host a server for you in the cloud. The minimum number of tags that you can select is 100.


50:10
Lenny
And if you tell Jeff, “I would like to have 105 tags hosted by you”, you literally just type it in. You can use the calculator that’s on the website: you can create a fully custom price for the exact number of tags that you require, as well as the exact number of concurrent Axiom clients that you require for this solution, and the calculator will populate that pricing for you right there. Just to reiterate: the minimum, or base, solution is 100 tags with one concurrent Axiom license, purchased perpetually for $4,000, or on a subscription model at $135 per month. As another example, if you need 1,000 tags with one concurrent Axiom client, that will set you back about $5,350, or $180 per month on the subscription model.


51:09
Lenny
If we scale that up to 5,000 tags and three concurrent licenses, it works out to $14,130, or $480 per month. And just to give an idea of scaling it up even further: 20,000 tags will set you back around $50,000, or $1,500 per month. And you can also go unlimited: if you need an unlimited number of tags with 20 concurrent Axiom clients, that will be around $90,000 upfront, or roughly $3,000 per month, give or take, on the subscription model. Cool. So obviously you can go and create those custom quotes, and you can request them as well.


51:58
Lenny
We will obviously be able to give you that quote in a Rand price as well, and we can also upgrade your solution to an enterprise licensing agreement, which takes your current solution and scales it up to the enterprise, with an unlimited number of Canary systems and all licensing limits removed. All right, it’s very easy to get started: you can request a download of the solution, and we can provide you with a 90-day trial license to get it up and running. Then, once you’ve got your solution up and running, the only thing left is to license it, which you would purchase at a fraction of the cost of other historian solutions that are out there.


52:49
Lenny
And then the third step is to deploy it and train your employees. And it comes with three years’ worth of support from the initial purchase date. All right, so Jeff, I’ve got a few Q&As here already. The first one, from Vusi, is: he wants to know what is the main difference between the Ignition historian and the Canary historian. Cool.


53:19
Jeff
You want to take it, Lenny, or you want me to handle it?


53:22
Lenny
You can take it. That’s fine. 


53:24
Jeff
So Ignition’s historian, like so many other SCADA historians, is a SQL database. So there is no real Ignition historian; Ignition has a module, which we use as well, to interface with the database you’re going to archive to. But all you really do for the Ignition historian is install a SQL database of your choice and point the historian module to that database. You’ll still need to manage the database. And Ignition, by default, is going to interpolate the data that it sends to the database in SQL, to help try to manage the size and the performance constraints of a SQL time-series database. We’ve seen it time and time again, and it’s not Ignition only, it’s all the SQL databases: a couple hundred tags for a couple of months, or a couple of years, is usually no problem.


54:22
Jeff
But once you start doing thousands of tags at high resolution, and you want to have a lot of clients access that data, you’re going to run into performance issues. Cool.


54:32
Lenny
Thanks, Jeff. Vusi, I hope that answered your question. I think a very similar type of question here from Henny is: does Ignition have a native historian? So Henny, as Jeff explained, they have a solution which stores data in a SQL database. So technically it’s not a native historian; they use SQL technology to store their tag information as time-series data. Jeff, then we have another question here from an anonymous attendee: are we able to use Windows authentication for logging into Axiom?


55:09
Jeff
Yes, in fact, all of Canary, great question, all of Canary is built on the .NET platform. In fact, we use WCF technologies as we move the data, and we use Active Directory as our primary option for all things security, for both the admin and the client side. So we can actually restrict, down to the tag level, what your clients see using Active Directory. If you’re not using Active Directory, we still have the ability to do username and password authentication and handle security that way.


55:45
Lenny
Cool, perfect. Thanks, Jeff. All right, guys, that’s almost the end. Here’s another question just popping in from Rob: would you need a full second license for the corporate historian on top of the site-specific historians?


56:00
Jeff
Yes, we license by server, and so, yes, you would need two licenses there.


56:07
Lenny
And just to reiterate that, Rob, there are two options: you can either go for the mirrored approach to get the data across, or you can dual log from the site to that corporate historian as well. Another question here from Henny: do you have to build a virtual view, or can you use the native tags as they are?


56:29
Jeff
Go ahead. We’re answering it the same way, we’re just doing it differently. So, yes, you can just browse native tags; you do not have to build virtual views. Probably 50% of our customers don’t build virtual views. But you know what, it’s a lot like data quality: if you do the work up front, everything else downriver of what you’ve done gets simpler. So we would encourage you to try to go that route. Sorry, Lenny.


56:56
Lenny
No, that’s fine. So, Henny, just on that as well: if your tag structure is nicely structured and it’s got the dots in it, Canary will by default see that dot and break it down by that dot structure in the tag name. So yes, if you’ve already got a very good standard, then you can get away without using the views. I think the last question here, and the rest we will answer via correspondence, is a question from Durain: what is the fastest theoretical rate one could read tags from the OPC server? So that’s if you use the logger, I presume, the OPC DA logger.


57:34
Jeff
So, quick answer: ten milliseconds is the fastest I’ve ever seen achieved. It was all Allen-Bradley, and there wasn’t really any specialized equipment. They were doing 24/7 polling at ten milliseconds on over 100 tags, and I’ve watched it happen live. So that’s the fastest I’ve ever seen. Cool.


58:01
Lenny
Perfect. Well, guys, just quickly, the results of your poll. So, Jeff, there you can see not a lot of you know regex, but it’s a very powerful tool. And I agree: if you get your head around it, those virtual views are massive, especially in a brownfields implementation, or if you’re starting to bring in edge devices to get your digital transformation project started. It is a very powerful tool to get yourself up and ready and actually use those virtual views.


58:35
Jeff
And Lenny, I think we should point out that from the Canary side, we are actually going to start teaching free regex courses, just so that folks who want to learn it can learn it. And I promise you, it’s a lot easier than it looks.


58:50
Lenny
Perfect. Thanks, Jeff. All right, guys, thank you very much. That’s a little bit over time, but I really appreciate everybody joining this afternoon. Jeff, as always, top notch, thanks a lot. If you have any questions, or if you want to inquire about the 90-day trial, you can contact us at Element8. You can send a mail to Jaco or to myself if you have my email address, and we will help you set up that 90-day trial, no problem at all. Also, keep on the lookout for our next webinar sessions; Clarise will send them out soon. We still have two webinars scheduled for lockdown. Hopefully lockdown is over by then, but keep on the lookout for that new webinar session that we’ll be hosting from Element8. Jeff, thanks very much.


59:40
Jeff
Thank you, everyone. 


59:41
Lenny
Enjoy the rest of your day. Cool. 
