Introduction
Some companies deal with an ever-changing landscape of devices and tags. Whether it’s assets in the field or entire plants coming online, automating the discovery of new tags and assets will dramatically decrease your workload while providing faster data access to your client tools. Ken shows us how to achieve this quickly with Canary.
Transcript
00:00
So my next topic that I’m going to talk about is automating asset discovery. This is actually kind of near and dear to my heart. It utilizes what we call our view service. I actually wrote a lot of the view service, I love working in the view service, and I’m really proud of what we were able to do with some of our software here. And a little side note, I try to hold the world record for the least number of slides during a presentation. My last one was three, this one is two. I’m not big on slides. I love a discussion. This time I’ll actually be demoing some software, so I don’t need to run a lot of slides. So as Clark had mentioned before, there are kind of three parts.
00:47
We want you to collect the data and store it, we want to contextualize it, and then we want you to maximize your operations. So if you start from left to right, we have our data collectors coming through our store and forward. I talked about that just a little bit in the last session; of course, that writes to the Canary historian. And then we get into… oh, he took my laser pointer already. Can I have it back? Thank you. So once it gets to the historian, well, that’s great. That’s where everything’s sitting on disk, that’s where we’re persisting it. But this views module, or view service, is where we add all the contextualization. This is where all the magic happens. Anything that has to do with a data request comes through our view service.
01:38
So whether that’s our own Axiom visualization tool, the Excel add-in, ODBC, or API queries, that all comes through our view service. Our view service is kind of like our gatekeeper. All our security is done there, and nothing can talk to the historian directly without coming through the view service. As I say that, let me switch over. I’m going to be jumping around here just a bit. And so I hope you can see… boy, this is tough, it’s not on my screen here. Okay, so here’s actually our administrator. We didn’t show a lot of software this morning when we were doing our overview. So this is our administrator here. You can see we have a tile for our historian. Our historian is not just a big blob where we throw everything in. We actually try to organize it and make it logical.
02:36
So in this case, you can see I have 18 tiles within my historian. We call these data sets. Data sets are the first level of segmentation within the historian. Typically, a data set relates to a logging source. So if I have some OPC logging sessions set up, and I have Site A and Site B, I’m going to run Site A to one data set and Site B to another data set. And so this is just a way for us to organize. Data sets are actually nothing more than a folder structure on the file system. So in this case, you can see I actually have some data sets that are active. I have some that are showing that there are some writers, which means data is actively being logged into these data sets.
03:25
I have other data sets that are just kind of stagnant, and I don’t have any current data coming into them. Now, these data sets can be inputs into what we call views, or virtual views, or asset models. We tend to throw around multiple names for them. But for those of you that come from a SQL background, you can think of our historian as being the SQL tables, where the raw storage is, and our views are pretty akin to SQL views. You are not storing any additional data. You’re just changing the virtual representation of it, or changing what data is coming back during a query. So in this case, I’ve created a few asset models or virtual views on my system, and I hope you can read this.
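To make that SQL-view analogy concrete, here is a minimal Python sketch of the idea, using invented paths and tag names (this is not Canary’s actual implementation): the view stores no samples of its own, only a way to resolve a friendly browse path to the underlying raw tag at query time.

```python
# Hypothetical illustration of a "virtual view": the historian holds the raw
# samples; the view only remembers how a friendly path maps to a raw tag.
RAW_HISTORIAN = {
    "Katerina.BRIGGS_5H.PT01": [(1, 101.2), (2, 99.8)],   # invented (time, value) samples
    "Carnes.HERB_1H.PT01":     [(1, 143.5), (2, 141.0)],
}

# No data is duplicated here, just a path translation.
OIL_AND_GAS_VIEW = {
    "Oil and Gas/Katerina/KBS/Pad 7/BRIGGS 5H/Pressure": "Katerina.BRIGGS_5H.PT01",
    "Oil and Gas/Carnes/KBN/Pad 2/HERB 1H/Pressure":     "Carnes.HERB_1H.PT01",
}

def query(view_path):
    """Resolve the virtual path, then read the raw samples it points at."""
    raw_tag = OIL_AND_GAS_VIEW[view_path]
    return RAW_HISTORIAN[raw_tag]

print(query("Oil and Gas/Katerina/KBS/Pad 7/BRIGGS 5H/Pressure"))
```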
04:18
And if you look at some of the properties on the left over here, I can see that the input to this virtual view tells me that it’s using a data set called Katerina and a data set called Carnes. And if I look back at my historian, I can see here I have a data set called Katerina that has roughly 15,000 tags in it, and I have a data set called Carnes that has just over 17,000 tags in it. Now, if I take a look at what is in those data sets, I’m going to switch here quick. This is really tough without it showing on my laptop. I’m going to jump over here. This is Axiom. I’m in our trending component here, and if I do add trends, I can see those same views that I could see in my admin.
05:08
I can see those are all available inside Axiom at this point. Now, I have a funny machine name, It Stay Away, and that’s really my historian name. And inside that I can see here’s my Katerina. And there’s not a lot of structure here. I do have one branch, but most of my tags are sitting out here in a very flat, long list of tags. So I don’t really know a whole lot about these tags at this point. I can discern a little bit. Again, as I said before, I work in oil and gas, so this is all based on what I work with. These are actually oil and gas tags. So I know that this Briggs 5H, I know it’s a well. And then I can see that there’s some readings underneath that.
05:58
But I don’t know a whole lot else about this other than what the well name is. And if I look at Carnes, it’s pretty similar. I can see there is maybe a little bit of structure here. But if I go into this, there’s not really a whole lot; I see one tag, even though there’s a branch here. So part of what the view service can do is take these flat structures, these things that don’t have a lot of context around them, and through properties that can be attached to the tags, or metadata, which is another word you could use, maybe I can build out something that’s a little more descriptive and usable to the end user. So if I go back to my administrator and look at my views.
06:47
So I’ve built a model here based on Katerina and Carnes, and I’ve told it that I want to use the following properties: a property called field, a route, a well pad, the Canary well name, and a description. And so what happens is, as Clark said before, we have a whole rules engine. I’m just going to show you this real quick, because I don’t want to overwhelm people and have them throw up their hands and go, oh, I don’t understand. We use a regular expression engine, regex for short. Not everyone knows regex, and we understand that. But just with a handful of rules here, I’ve built a model that maybe better describes that data I showed you that was extremely flat. So I don’t want to go and teach you what regex syntax is.
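Just as a rough illustration of what a rule like that does (the tag naming convention below is invented, and this is not Canary’s rules engine), a single regular expression can pull hierarchy levels out of a flat tag name:

```python
import re

# Invented naming convention: FIELD_ROUTE_PAD_WELL.MEASUREMENT
RULE = re.compile(
    r"^(?P<field>[^_]+)_(?P<route>[^_]+)_(?P<pad>[^_]+)_(?P<well>[^.]+)\.(?P<measurement>.+)$"
)

flat_tags = [
    "KATERINA_KBS_PAD07_BRIGGS5H.Pressure",
    "KATERINA_KBS_PAD07_BRIGGS5H.Temperature",
    "CARNES_KBN_PAD02_HERB1H.Pressure",
]

tree = {}
for tag in flat_tags:
    m = RULE.match(tag)
    if not m:                                   # tags that don't follow the convention are left out
        continue
    p = m.groupdict()
    node = tree
    for level in (p["field"], p["route"], p["pad"], p["well"]):
        node = node.setdefault(level, {})       # build field -> route -> pad -> well branches
    node[p["measurement"]] = tag                # the leaf points back at the raw tag

print(tree)
```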
07:36
That would be a different session in a different class. But I do want to show you that if I drill into this model now, I can see, okay, here’s my Katerina and my Carnes. And I can see that within this model, I’ve called these a field. So now I have some asset types that are starting to appear. And if I drill into my field, now I have some other hierarchy, and we are now calling these routes. So instead of a flat structure, I can see, okay, now I’m starting to build some hierarchy out here. If I go into KBS, well, now I can see I actually have 24 of something sitting under KBS, and these somethings are called pads, or well pads. And so let me go back to Axiom quickly. So now I don’t want to look in It Stay Away.
08:29
I now want to look in my oil and gas model, where I was just looking. And now I can see I have some real structure here that’s going to explain and maybe provide some more context to the data that I have there. And so if I go all the way into Herb 1H, now I can see all the different sensor readings that are available for that well. Same if I pick another well pad: I can see that this one has a few more, but it’s basically the same tag list. So I can see that I have some consistency from asset to asset. Part of our view service is that we don’t require those assets to be completely uniform. We understand that different pieces of equipment and different processes bring different tag counts, and that’s okay.
09:20
And so our software is very tolerant about tags either existing or not existing under assets. And so this is great for maybe an engineer that understands the geographic layout of what I’ve described here for my wells. But what if I have a business user that has to pay the royalties to the owner of the land based on how much oil they’re pulling out of the ground? Those accounting people don’t understand the geographic layout. They don’t care what the geographic layout is. They think about the data differently. And so what our view service allows you to do is take a historian tag and present it in as many models, or as many different ways, as you need to satisfy your internal users’ needs. So in this case, and this is real live information that we’ve built for a customer, actually.
10:24
So in this case, we’ve created, and I gave it a really original name here, oil and gas alternative, or alt. It’s using the same Katerina and Carnes inputs, but we’re using different properties now. And because we’re going to use different properties, that’s going to completely reshape what that structure looks like. And so if I drill into this now, well, now I see something that’s certainly not a field name or a route name or a well name. This is actually their accounting term, or what they call a unique well identifier. And this is what accounting uses to pay the landowners their royalties for the gas or oil that’s extracted from their land. And so accounting can now use our software; they can come in here and they can look at various readings.
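The reshaping is easiest to picture as grouping the same tag metadata by a different set of properties. Here is a minimal, hypothetical Python sketch (the property values are invented, not the customer’s real data):

```python
# The same underlying tags, each carrying metadata/properties (values invented).
tags = [
    {"tag": "Katerina.BRIGGS_5H.PT01", "field": "Katerina", "route": "KBS",
     "well": "BRIGGS 5H", "uwi": "42-123-45678"},
    {"tag": "Carnes.HERB_1H.PT01", "field": "Carnes", "route": "KBN",
     "well": "HERB 1H", "uwi": "42-987-65432"},
]

def build_model(tags, levels):
    """Group the same tags by whichever properties a given audience cares about."""
    model = {}
    for t in tags:
        node = model
        for prop in levels:
            node = node.setdefault(t[prop], {})
        node[t["tag"]] = t["tag"]           # the leaf still points at the one raw tag
    return model

geographic = build_model(tags, ["field", "route", "well"])   # the engineer's view
accounting = build_model(tags, ["uwi"])                      # the accountant's view
```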
11:16
Or if we had all the sales readings that were coming off the meters, they could discern exactly what they needed to, just because we’ve reshaped the data. It’s the same underlying data, but it’s reshaped and presented in a different way. And so if I look at that in Axiom, if I come back over here, I choose my oil and gas alt. Now I can see, I think, there are 700 and some wells on this system. And so it’s the same information, just presented a little differently. Now, that’s not really the core of what this was about, but I needed to give you a little bit of background on how we structure things in the historian and what’s possible in views. And so now I want to move on to what the topic is really about: asset discovery.
12:12
So, as I said earlier, each one of our data sets kind of represents a logging source. That’s how we try to structure the historian. In this case, I have a data set called Canary Oil that’s already coming into the system. I can see there are 1,230 tags that we’re writing to, and if I jump back to Axiom, I’ve actually built just a very basic display to represent some of these tags. So, again, I’ve created a well asset, and I have a list of wells, and inside my asset model I have 43 of them. In this case, I’m using a specialized control within Axiom called an asset template, and I’m paging these. So I’m displaying the first 15, I can easily go see the next 15, and finally I would see the final 13.
13:17
What our asset templating within Axiom allows you to do, though, and I think Gary kind of touched on this in his presentation: this is great, I can see all 43 of these, but I don’t really care about the ones that are performing well, right? I’m more concerned about ones that meet a certain condition, or ones that need my attention, or ones that aren’t producing anything and therefore aren’t making me money. Okay, so what we allow you to do is what we call a filter expression. You can supply a filter to your assets to narrow them down and maybe give an end user an idea of the ones that need attention. So in this case, I’ve already predefined a couple of things. Maybe I know that if the pressure of my well drops too low, that’s going to impact production.
14:12
So I’ve set up a little button here, and if I choose that button, you can see an expression filled in here. And now, instead of 43 assets, I only have eight that I have to look at. In the US, we call this operating by exception. Don’t make me go through and look at all the 90% that are good; show me the 10% that are bad. And so in this case, I wrote an expression that says if my pressure is under 140, those are the only ones I want to see, and I want to sort by the lowest pressure. So I can see that this 70.98 is my lowest one, and therefore first in the list.
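Conceptually, the filter expression behaves like this small sketch (the asset values and the syntax here are made up for illustration, not Axiom’s actual expression language):

```python
# Invented asset snapshot: each well with its latest readings.
wells = [
    {"name": "BRIGGS 5H", "pressure": 70.98,  "temperature": 182.0, "production": 0.0},
    {"name": "HERB 1H",   "pressure": 133.40, "temperature": 200.0, "production": 12.3},
    {"name": "KBS 12H",   "pressure": 155.00, "temperature": 196.0, "production": 8.1},
]

# "Operate by exception": only show wells below the pressure threshold,
# with the worst (lowest pressure) first.
low_pressure = sorted(
    (w for w in wells if w["pressure"] < 140),
    key=lambda w: w["pressure"],
)
for w in low_pressure:
    print(w["name"], w["pressure"])
```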
14:55
Similarly, if I was concerned about high temperatures, if I choose high temperature, oddly enough I still have eight assets, but I can see I’ve now sorted by temperature, descending. So here are my highest ones, all recording 200, then I have a 196, and so forth. So I’m looking for ones greater than 100. If I really want to make money, show me the ones that aren’t producing anything. And unfortunately, 37 of my wells are not producing much right now. We built this over time. We built this because we sit with our clients, we watch them do their daily jobs, and we ask, how can we make that better?
15:40
And that’s really how a lot of our modeling functionality and our asset templating within Axiom was born: out of watching users perform their jobs and trying to come up with a software solution to make their jobs easier. And so I can go back and set this back to a default view, and there are my 43 once again. But this is just kind of a summary screen of what’s going on. Maybe I need to go from my summary screen to a detailed screen to really investigate a little further. So in this case, we’ve actually created kind of a hyperlink off of our label here, and if I choose this hyperlink, I simply have an instruction telling it to go to my detailed screen.
16:32
So on my detailed screen, now I have some trending, I have some history, I have three days of data for various points on this well. I have a scatter chart here showing me a correlation between two different pressures, and I can see my production over the last week. It’s been fairly consistent, but it hasn’t been great. This is all simulation data, so don’t read into it a whole lot. But from my summary screen, which I can return to here, I can get a quick overview of my process, and then I can launch into something that’s going to show me more details of my process.
17:18
And so this is a quite common use case in the US. When we’re working with customers, we want to give them that kind of summary glance, give them some conditions that are going to help them solve problems, or at least identify assets that need some attention, and then help them drill in, really analyze, and see what’s going on behind the scenes. So let me go back to my default view here, and I’m going to fire up another program that’s going to generate a little bit more simulation data. So I have some new sites that are coming online, and I have some tags that I’m going to suddenly start generating. And if we look over here, I already have a data set created called Canary Oil Elevate.
18:12
And right now, obviously, there are no writers, but the idea is that if I come up here and tell it to suddenly start logging, I should see some feedback down here saying that we’ve started. If I look over here at the historian, I can see, oh, I’ve started writing another 1,230 tags. And if I look in my messages, excuse me, I can see there are the messages I was waiting for. So I can see that it’s reporting that my logging has started, and I have some updates that have come into it. And there we go. So now I’m getting some messages reporting that within my historian, one of my data sets has changed, and then it reports that Canary Oil Elevate has changed. What does that mean? A change to us means that tags have been added, or metadata or properties have changed.
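A rough way to picture what happens next (this is an illustration of the pattern, not Canary’s internals, and the view-to-data-set mapping is assumed for the example): the historian raises a change event for a data set, and any view that lists that data set as an input re-runs its rules and refreshes its in-memory browse tree.

```python
# Hypothetical registry of which views use which data sets as inputs.
VIEW_INPUTS = {
    "Canary Oil":      ["Canary Oil", "Canary Oil Elevate"],
    "Oil and Gas":     ["Katerina", "Carnes"],
    "Oil and Gas Alt": ["Katerina", "Carnes"],
}

def rebuild(view_name):
    # Stand-in for re-running the view's rules and refreshing its browse cache.
    print(f"re-running rules and refreshing the browse cache for {view_name}")

def on_dataset_changed(dataset):
    """Called when tags are added to a data set or its properties change."""
    for view, inputs in VIEW_INPUTS.items():
        if dataset in inputs:
            rebuild(view)

on_dataset_changed("Canary Oil Elevate")   # new tags arrive -> the Canary Oil view rebuilds
```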
19:22
And why that’s important is that, obviously, if new tags come in, that could impact a model that I have built. If metadata has changed, that could impact the model I have built. If I have moved a well from one route to another, I need to reflect that in the browse tree appropriately. And so what happens is, since we started logging new data, it went ahead and updated a lot of what we showed here. If I go back to my browse tree, this browse tree is held in memory inside our service so that it’s very responsive. You might have 10, 20, hundreds of clients coming in; we want that to be very responsive and get back the information that they need. And so this message is telling me that Canary Oil Elevate has been added to our memory cache. And so if I was to refresh my It Stay Away…
20:18
And I look, and I can see Canary Oil Elevate is now in the browse tree, showing the tags that have come into the system. And so based on refreshing its cache, it has now essentially retooled all of our views. So if I go to the highest level of views, the fact that that data set has changed has notified all these views that, hey, this new data set has arrived and now has tags in it, and that has caused our Canary Oil model to rebuild. Canary Oil said, I’m interested in that data set, and therefore I should run my rules through the rules engine to see if that impacts my structure. And so, just as in Axiom, sorry for jumping around here, I know I had 43 assets.
21:14
If I quickly look at my view service and I look at my Canary Oil model, which is the basis for that screen, I can see I now have 86 wells instead of 43. And if I come back, oops, wrong browser, if I come back and maybe reload my application here, instead of 43 I’m going to end up with a magic number of 86. All of a sudden. I didn’t do any maintenance to the system. I started a new logging session. I didn’t have to create my tags in the historian, and I didn’t have to go adjust my model. Now, of course, our models are based on whether the tags are normalized. Do I have consistent naming conventions in place?
22:04
Do I have the appropriate metadata already in place, if my model is based on metadata? So things can’t magically appear if they’re not following a convention or they’re not normalized. Our modeling is very, I don’t want to say picky, but it is based on rules of pattern matching, and normalization, of course, plays into that. The tags that I started logging met my standard for how I wanted to log tags, and therefore those assets automatically appear and come into my model. And so if I now look at a certain condition, say pressure low, I don’t have eight or nine anymore; I now have 21 that I need to address. And by the way, I tried to use some cattle breeds that are native to South Africa.
22:53
I don’t know what some of these names are or what variety they are, so I don’t know what a Drakensberger is, or an Afrikaner, but I do know what a Guernsey is and a Longhorn is. I do know a little bit of German, so I might be able to translate that, I don’t know. Yeah, absolutely. Yeah. So Axiom is reading from our historian, but the logging source could have been any PLC, or any data source, or Ignition using our module. In this case, we have a data simulation tool, so I just used data simulation to push some new tags in. But everything in Axiom is going to be reading from our historian, so we’re not aware of other systems. Now, Axiom does have a whole scripting engine behind the scenes, just as Ignition does.
24:05
That’s not to say I haven’t seen some guys pull in some SQL data and present it on our screens, but natively, we only read from our historian and the views that you’ve created in the system. Thank you for that. So that’s kind of automatic asset discovery. We’ve worked really hard to make our system as maintenance free as possible. Not everything is completely maintenance free, of course, but operationally, when you have your data sets in place and you have your models in place, if you have an operation where new tags are arriving, we try really hard to make that hands off and maintenance free. Obviously, there are a lot of applications where it’s a static tag list. The tag list doesn’t change for ten years. It’s a manufacturing facility.
25:10
Here’s what I have, and we’re not changing it. But there are certain industries where things are very dynamic, where tags are coming and going, and those are some of the markets that we do really well in. And so we’ve tried to build that into our software, to account for that and to be able to capture it. I’m trying to think if there’s anything else I wanted to say. I don’t even have any notes in my PowerPoint, my two-slide PowerPoint. So, any additional questions? Okay, thank you.