Introduction
In this session, the team takes us through Canary 25—a major step forward for the platform and a roadmap packed with powerful new features.
You’ll explore how enhanced backend support and smarter data handling set the stage for better performance across the board. We’ll look at upcoming advancements in Calcs & Events, including long-awaited batch support, and dive into improved usability updates coming to Axiom and Views.
The presentation also highlights strengthened reliability and modernised authentication—ensuring Canary continues to deliver the stability, security, and trust your operations rely on.
If you want a clear, future-focused look at where the Canary platform is headed, this session has you covered.
Transcript
00:05
Speaker 1
A lot of times, when people first learn about Canary or they see me for the first time, they’ll say, Oh, Canary is just the historian, right? And they mean well, because sure, we certainly are the data historian, but we refer to it as the Canary system because it is much more than just a data repository. You can do so much more than just store the data. We had our user conference at Canary back in August, and as we mentioned, we’re celebrating 40 years. The company was founded by two brothers. They still serve as our president and vice president. And one of them mentioned that in 1985, being data-driven, making data-driven decisions, wasn’t as popular and as important and as applicable as it is today.
00:48
Speaker 1
And so, really being efficient and helping companies make data-driven decisions is at the core of what we do. And we also want to offer direct integration. You see all these amazing tools coming down the line. We want to be able to connect to those tools. So at Canary, we sort of break it down into three steps. You have to collect and store the data, assign context to the data to make it relevant, and then ultimately get it into the other tools, into the hands of the operators who need the data to make better decisions. We’re going to talk about what’s new in version 25. Wanted to add some context around how we got there, because just diving into new features if you’re not familiar with the initial platform might be a little confusing. So, I just want to tell you what we’re built upon.
01:38
Speaker 1
We believe in a platform that is open. Now, we are a proprietary database, but we’re open in the sense that we want to break free from the platform lock-in that some other vendors try to put you in. And the way that we do that is we do not license our data collectors individually. So every Canary system includes every collection protocol that we use. We want to be open in that way. We also want to be secure. Data security is of the utmost importance, given the types of environments we’re installed in. The US Navy’s been a longtime customer of Canary. You think about those types of environments, you know, security is of the utmost importance. And we’ve got to be adaptable. That means adaptable when it comes to how a project can roll out and scale, adaptable in terms of architectures that we can serve.
02:29
Speaker 1
But even on the licensing side, if it doesn’t make sense from a monetary standpoint, it’s not going to scale to the full enterprise. And that’s how we view ourselves as a true enterprise data historian. So we have perpetual licensing, which, as many of you know, isn’t as relevant or as popular in the market as it used to be. We’re proud to always offer perpetual licensing. We have a subscription method as well, if that’s what companies want to do, and a SaaS offering. So what we try and do is serve the needs of our clients. Now we are a NoSQL time series database. So our calling card, our secret sauce, so to speak, will be our lossless compression algorithm.
03:12
Speaker 1
So you never need to discard data values; you always have access to the raw data values as long as you need them in the Historian. Highly scalable. Ken mentioned during the fireside chat some of the high tag counts on a single server that we offer. Of course, there are a lot of, you know, caveats with some of these larger instances, but we can scale to millions and millions of tags, and then be flexible. As these enterprise solutions are getting more and more complex, the types of architectures that they require are also more complex. But with Canary, we make it easy to fit into those environments. Now we’ll get to the Historian, just the Historian. It really is the core of the Canary system. Of course, everything is delivered from the Historian. The Historian is, as you know, just a collection of tags, timestamps, values, qualities, and metadata.
04:12
Speaker 1
We want to make it easy to create that basic level, but then scale beyond that. So the tags get grouped into data sets, just collections of, you know, similar tags. And when the client is requesting data from the Historian, it’s really just asking for that collection of TVQs. And then the Views: this is where, you know, it gets a little more creative, being able to quickly find and organise and interpret the data that matters most to the organisation. So we do have the Canary Views service, and that’s what a lot of the new features are built around: the Views and how you can interact with the data that way.
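To make that vocabulary concrete, here is a tiny, purely illustrative Python sketch of the structure being described. The class and field names are invented for illustration; this is not Canary’s actual schema or storage format.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TVQ:
    """One historized sample: timestamp, value, quality."""
    timestamp: datetime
    value: float
    quality: int  # e.g. 192 = "good" in OPC-style quality codes

@dataclass
class Tag:
    """A named series of TVQs plus descriptive metadata (properties)."""
    name: str
    samples: list[TVQ] = field(default_factory=list)
    properties: dict[str, str] = field(default_factory=dict)

@dataclass
class DataSet:
    """A dataset is just a named grouping of similar tags."""
    name: str
    tags: dict[str, Tag] = field(default_factory=dict)

# A client request is conceptually "give me the TVQs for these tag paths":
site = DataSet("Martinsburg", {"Boiler1.Pressure": Tag("Boiler1.Pressure")})
site.tags["Boiler1.Pressure"].samples.append(TVQ(datetime(2025, 1, 1), 42.7, 192))
```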
04:54
Speaker 1
And as you see on the chart here, the blue boxes are the Windows services that Canary is based upon and how the data is flowing through the Canary system. And whether it’s a third-party tool or our own tools, such as Axiom, our trending, dashboarding, and reporting tool, it’s all about reading the data from the Views service rather than going into the Historian itself. So it’s that single endpoint which helps to make the architecture scale so well. Within the Views, you certainly do have the Historian view, so the way the data looks coming off the PLCs, the devices, the raw tag names. But system administrators can get creative in restructuring and aliasing the tag names, and this is where you can start to assign more context to the data.
05:47
Speaker 1
So that’s a quick overview of Canary; if you’re not familiar with it, I just wanted to give a quick rundown for those in the room. But now the future of the system, where we’re headed, is really leaning into this enhanced backend support and data handling. We’re getting into more and more complex architectures, more and more enterprise solutions, and Canary is built to withstand those types of environments. A lot of that is centered also around calculations and events, which we’ll touch on. Improved usability with Axiom and with the Views service, as well as some other features that we’ll go into in more detail. So Ken’s going to tell you more about how Views works. Just wanted to lay the groundwork and kind of set the scene for you that way.
06:33
Speaker 2
Sure.
06:34
Speaker 1
Thanks.
06:35
Speaker 2
Glad to be back. This is my third year here, and I feel like I talk about Views every year, so maybe it’s kind of my baby, I don’t know. So Views is really the gatekeeper for all data, so any requests, as Kyle said, are coming through Views. It actually does a lot of the hard calculations that we do, too. So when you make a request, maybe you just want the raw data; that’s easy, that’s a straight pass-through. Maybe you want what we call processed or aggregated data: instead of having to have another historical tag that is your one-hour average, Views can actually do that computation on the fly. So you can make a request for a set of tags over the last month, what’s my one-day average, and Views does all that quick computation and spits out the result.
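As a rough illustration of the kind of on-the-fly rollup being described (plain Python, not Canary’s code or API), here is how raw samples collapse into one-day averages:

```python
from collections import defaultdict
from datetime import datetime, timedelta
import random

# Fake raw readings: one sample every 15 minutes for the last 30 days.
now = datetime(2025, 9, 1)
raw = [(now - timedelta(minutes=15 * i), 50 + random.random() * 10)
       for i in range(30 * 24 * 4)]

# "What's my one-day average?" -- group by calendar day, then average.
buckets = defaultdict(list)
for ts, value in raw:
    buckets[ts.date()].append(value)

daily_avg = {day: sum(vals) / len(vals) for day, vals in sorted(buckets.items())}
for day, avg in list(daily_avg.items())[:3]:
    print(day, round(avg, 2))
```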
07:20
Speaker 2
It’s built over gRPC, so we have good, you know, security from end to end. But Views is really a part of the system that isn’t utilised heavily enough, unfortunately. Views unlocks a lot of functionality and usability. On down the road, we’ll talk about calculations. Won’t talk a whole lot about Axiom today, but we have all kinds of controls for Axiom that are built around assets and things that you can do for asset monitoring, reporting around assets, things like that. But sometimes a lot of what we do in Views has to be supplemented with what we call tag properties or metadata. So just as we have the tags in the Historian, those tags can be decorated with pieces of metadata, and that metadata can then be used to help construct new browse structures and build additional context around your tags.
08:17
Speaker 2
We know the tag paths coming from all the different data sources, unfortunately, are not always normalised, and there are not always good standards. Maybe you have a standard, but you acquire another company, and they have a different standard. How are you going to unify that data? Advanced models, predictive models, anything like that, how are they going to take advantage when you have different naming conventions? So Views can be that layer where you do that normalisation. Ideally, that’s done at the edge. I know you guys are obviously big Ignition users and using UDTs; that’s great. We love it when everything flows into the system already well described, but the reality is that’s not always the case. So Views, the way it’s constructed, is actually a series of regular expression rules.
09:09
Speaker 2
So you choose where your inputs are, and then once you have those inputs, you would start constructing a series of rules that are executed in series. Maybe you need to do some transformations, some aliasing, some renaming. And this regex piece is the part that we found was prohibitive to people adopting it. So we found ourselves having to assist many end users on how to actually build these out. With Views out now (I think we built this in 2018 or 2019 as a module), taking a look back and seeing the patterns that were repeating, we decided to come up with some custom functions that could help describe our models, help do some of the manual work that we have to do inside our models, but do it without regex.
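To give a feel for what a rule-based, regex-driven normalisation looks like in general, here is a generic Python sketch with invented tag paths; it is not a Canary Views configuration, just the same idea of rules applied in series.

```python
import re

# Two acquired sites with different naming standards.
raw_tags = [
    "MBG/BatteryStorage/BESS01/Battery03/Rack12/Voltage",
    "RS-PV.BattStore.BESS-2.BAT-7.RCK-004.VOLT",
]

# Each rule is (pattern, replacement), applied one after another.
rules = [
    (re.compile(r"^RS-PV\."), "RoaringSpring/"),            # alias the site prefix
    (re.compile(r"^MBG/"), "Martinsburg/"),
    (re.compile(r"BattStore|BatteryStorage"), "Battery Storage"),
    (re.compile(r"\."), "/"),                                # unify the path delimiter
    (re.compile(r"VOLT$"), "Voltage"),                       # normalise the measurement name
]

for tag in raw_tags:
    out = tag
    for pattern, replacement in rules:
        out = pattern.sub(replacement, out)
    print(out)
```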
10:00
Speaker 2
So hopefully I can jump over to demo mode here, and I’m going to go into my Views tile here. I already have a number of views built out on my machine, but I’m going to go ahead and construct one from scratch. So I’m going to create a new view, and this is actually a real-life view that I helped an end user create. They deploy multiple solar fields, PV sites. Alongside those PV sites, they have battery storage, and for warranty work they need to share this battery storage information, all their racks and cells, out with a vendor. They didn’t want to expose all their tags to the vendor that needs to consume this data, so a view allows us to narrow the scope of what we wanted to expose.
10:54
Speaker 2
And so let me go ahead and put in what my inputs are. So I have two data sets, two distinct data sources coming into my Historian, and I’m going to choose both of those sites. We have a site called Martinsburg, and we have a site called Roaring Spring. Now, in this case, the data is coming from Ignition. It’s great. It’s all based on UDTs, it’s all normalised, it’s all extremely consistent. These were greenfield projects. So I don’t need to supplement this model with any metadata or properties; this is where I would add that information in so that I can then use it later on in the model. So if I say okay, it does a bit of thinking, and I can see what I brought in. Wow, my mouse is really touchy.
11:45
Speaker 2
This is going to be fun. My laptop does this sometimes when I’m not plugged in. Okay. So if I expand this out, I’ve done nothing yet, so it’s a direct pass-through of what the tag name looks like in the Historian. But I can see the organisation here. We have a process that we call battery storage. We can see the BESS systems, and we can then see batteries. So what I want to start doing now is defining, or narrowing, the scope of what I want in this model. For this case, we only wanted battery storage. So I’m going to create what we call a model rule, which means we’re doing some sort of transformation, or in this case, we’re going to do an exclusion. We want to kick these out. So I’m going to say tag name does not contain battery storage.
12:43
Speaker 2
And as I type this, we try to auto-suggest or find the first tag that matches that pattern. So I can clearly see that I found something. And if I say exclude, I can see, okay, I’ve kicked out some 3,000 tags out of my model here that aren’t part of the scope for what we want to share. Now that I’ve done that, I actually don’t need to do any aliasing; I don’t have to do anything else. So I’m going to go right into applying asset types now. So I can say add. Now this is going to be an asset-type rule. And in this case, all I have to do is say branch number one, that’s going to equate to a site. And if I say okay, I quickly get feedback; I can see it matched the remaining 40,000 tags.
13:32
Speaker 2
I have two sites, I have two instances, and they happen to be the names of my data sets, Martinsburg and Roaring Spring. I’m going to continue to work down through the hierarchy. So I’m going to say branch 2, and that’s going to equate to my process. Now, since I’m only bringing in my battery storage tags, and I’ve kicked out Substation and some other sub-processes that they have, each one of these is still going to equate to battery storage. So I’m going to continue down through my hierarchy now, levels 1, 2 and 3. So I’m going to say branch 3. There’s only a single piece of equipment there; each level of the hierarchy only represents a single asset type, so I don’t have to do anything really special yet.
14:28
Speaker 2
So I can just quickly apply level three. Now, once I get to level four of my hierarchy, I’m actually going to have multiple types of equipment. So as I drill in here and expand, you know, BESS number one, I can see I have batteries. Batteries have lots and lots and lots of tags. If I cheat and scroll down here, I’m eventually going to get to other pieces of equipment... maybe not, it’s a really long list. Oh, there, I can see I have BMS. So yeah, we’ll continue on. So now I want to add another asset type. We’re going to say branch number four, but I need to kind of qualify it now. So I’m going to say branch contains, and I want to do a four, and I want to say battery.
15:26
Speaker 2
And these are, of course, going to be called batteries. And hopefully I got my syntax right here... and I did not. This is the joy of doing live demos. And I don’t have my glasses on either; I’m getting old. So this should be “contains”. There we go. And now it auto-filled with a match. So now if we do that, we can see I have 12 batteries across all my assets, or across my sites. And there are other pieces of equipment at level four that I could define, but in the interest of time, I’m just going to jump to one more rule. I’m going to say branch five contains, and I’m looking for my racks that are part of the batteries. Apply that quickly, and we can see that they have 360 racks present inside their model.
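As a rough, non-Canary sketch of what that rule set amounts to, here is the same idea in Python: exclude everything outside battery storage, map branch levels of the remaining tag paths to asset types, and count distinct instances. The tag paths and counts are made up for illustration.

```python
from collections import defaultdict

tags = [
    "Martinsburg/Battery Storage/BESS01/Battery03/Rack12/Voltage",
    "Martinsburg/Battery Storage/BESS01/Battery03/Rack12/Temperature",
    "RoaringSpring/Battery Storage/BESS02/Battery07/Rack04/Voltage",
    "Martinsburg/Substation/Feeder1/Breaker/Status",   # outside the scope we want to share
]

# Rule 1: exclusion -- tag path does not contain "Battery Storage" -> kick it out.
scoped = [t for t in tags if "Battery Storage" in t]

# Rules 2..n: asset-type rules -- branch N of the path equates to an asset type.
asset_levels = {1: "Site", 2: "Process", 3: "BESS", 4: "Battery", 5: "Rack"}

instances = defaultdict(set)
for tag in scoped:
    branches = tag.split("/")
    for level, asset_type in asset_levels.items():
        # Instance identity is the path down to that branch.
        instances[asset_type].add("/".join(branches[:level]))

for asset_type, found in instances.items():
    print(asset_type, len(found))
```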
16:31
Speaker 2
Now I didn’t do anything to define a template for a rack. We like to say we derive those templates. And so I can take a peek at what that looks like. I can say asset definition, and I can choose my rack. And I can see there are 360 instances with 103 distinct tags across all those instances. Now, in this case, since everything is so nice and uniform, because that always happens, right, I can actually see in my coverage column that every instance actually has every tag. We know that’s not the norm; we know there’s wild variation from asset to asset. The system is very forgiving, and we understand that, you know, assets vary. And so we have all kinds of protections and logic built in behind the scenes for when we’re doing asset processing and tags don’t exist on those instances.
17:26
Speaker 2
You know, calculations will just skip that instance altogether because obviously we can’t calculate on something that’s not there. And even in displays and things like that, we understand that there’s high variation between assets. So that’s some of the new features. If I hit Create here, it would go off and create my asset model. And I created an asset model without actually having to write any regex, for once. So hopefully that helps drive some adoption. We certainly have a lot of systems alongside Ignition, a lot of systems receiving data through MQTT where everything’s already well described, well derived. You do all that work at the edge, it flows through, and we get it for free. Hopefully, this helps drive some adoption and doesn’t make Views so scary. All right, so let’s jump back to good old PowerPoint.
18:22
Speaker 2
So let’s talk about calcs and events. Most of the work that our engineering department has done in the last year is all around events, and we’ll get into why we did that: to reach a few industries or verticals where we want to do more business, we needed the concept of batches. So our events before were rather rudimentary. We could capture a start, we could capture an end, but we couldn’t capture any stages or sub-steps, anything like that. So we’ve done a lot of work to overhaul our calc server to be able to capture those batch-style events that need to occur.
19:05
Speaker 1
Yeah, talking about calcs and events here, there are really two ways to think about it. When you are interacting with Axiom and the tags that exist there, you can run calculations on the fly just to learn from the data, try and troubleshoot with that data at your fingertips; that customisation is what Axiom provides. The calculations here are on the Canary Administrator, the same tool that Ken was showing with Views. These are values that get written back into the Historian, and they’re stored as part of the historical record. Because while the raw time series data is valuable, you’re not necessarily going to have the context that you need, which the calculations can provide. So there are single-tag transformations, multiple-tag expressions, all sorts of different calculation functions you can build; that’s done on the Administrator side.
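As a hand-wavy illustration in plain Python (not the calc server’s actual expression language), a multiple-tag expression might combine two raw series into a new derived tag whose values are then stored alongside the raw history:

```python
from datetime import datetime, timedelta

# Raw historized series (timestamp -> value), aligned for simplicity.
start = datetime(2025, 9, 1)
flow_in  = {start + timedelta(hours=h): 100.0 + h for h in range(4)}
flow_out = {start + timedelta(hours=h):  90.0 + h for h in range(4)}

# Multi-tag expression: efficiency (%) = out / in * 100, written back as its own tag.
efficiency_tag = {
    ts: round(flow_out[ts] / flow_in[ts] * 100.0, 2)
    for ts in flow_in
}

# These derived values become part of the historical record, so a trend or
# report can ask for the efficiency tag without recomputing it from raw tags.
for ts, value in efficiency_tag.items():
    print(ts.isoformat(), value)
```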
19:53
Speaker 1
Also the condition-based rules, really just more KPIs, and the ability to build those out en masse on the calc server. And here are some things that you can do with events: process monitoring, fault detection. The key part that I’ve seen customers value (Ken has worked with more of our end users than I have) is the automated reporting that events can provide, and that key information that you can send out through a notification, getting it into the hands of the operators, the people that need that information. These reports can be generated easily and really put the Canary data to use, so it doesn’t just sit there in a dusty data repository. No, it’s a living, breathing tool that engineers and operators are using on a daily basis.
20:47
Speaker 1
When it comes to the value of events, you know, we’re talking about not just tracking those KPIs, but really automating the workflows. What we’re trying to do is free up people’s time, because you don’t want to be scrolling through trying to find something that happened. You just want the information that’s applicable. Maybe you just want to see something that’s outside of the desired range. If you look at the battery storage project that Ken pulled some of the tags up from, imagine scrolling through, just trying to find a needle in a haystack. That’s where events can be much more scalable, much more performant.
21:22
Speaker 2
Okay. And so I mentioned that our engineering team focused on calcs and events for most of the year. Here’s just some bullets of things that we tried to accomplish or tackle; we’ll go into each one of these in a little more detail, so I’m not going to read the bullet list at this time. So while calcs was actually one of our newer modules, we built it in 2019, there was a lot to try to capture on a single screen. And as I said in the opening at the fireside chat, we had a customer who went absolutely crazy with calculations and literally had defined over a thousand calculations inside our system. Imagine what a list of a thousand on a single status screen looked like. We had no organisation at that time.
22:07
Speaker 2
And so they were coming up with all kinds of crazy naming conventions and adding spaces so that they had their nice little indentation. And so we felt really bad, but that was the hand we dealt them. So over the years, we’ve seen the adoption of calculations grow and the need to provide this organisation. Something that seems so minor, I don’t know why we didn’t do it from the start, but we finally got there. So we do have the concept of folders now. You can do all kinds of organisation and grouping. You can have folders, subfolders, and as many layers as you need to go. And then at the folder level, you can now start doing all kinds of functions as well.
22:46
Speaker 2
So instead of thinking of jobs individually, you can now iterate and do things on known groups of jobs all at once, based on the folder that they reside in. So not only was the status screen in need of an overhaul, but the actual detail screen has changed significantly, too. If you take, you know, version 25.4 that just got released this week and open up an existing calculation, you may go, oh, what happened here? We tried to really rethink it based on workflow, rather than trying to put panels all over the screen. So the screen’s a bit more dynamic, with a lot more hiding and bringing fields to the foreground as you work through the definition process. All right, so I touched on multi-level events.
23:40
Speaker 2
So, you know, as I said before, I feel our events were really good at capturing that top level there, that order run. You know, we could tell when something started and we could tell when something stopped, but we were ignoring all the sub-steps that were in between there. So now, with multi-level events based on a view that you have created, you can create events to match that process. And so maybe you have an order, and multiple batches are going to go into that order, but to fulfil that batch, of course, you need to maybe weigh your materials, your inputs, maybe you have to mix it, maybe you have to fill it, maybe you have to package it. However many steps and processes are in place, we can now capture all those phases.
24:27
Speaker 2
And just as you could have an email generated at the beginning or end of an event, you can do that with each stage as well. Something that we weren’t good at before was what we’re now calling parallel events for batch processing. That top-level order run, we could only have one of those in effect at a time. Obviously, that’s not reality. And so we had to do a lot of work under the covers to be able to support this kind of parallelism, where multiple events can be in process at a single time as you work through the various stages or phases. And of course, how do you want to see this now? So people love our trending. Odd fact: we were actually a trending company before we were a historian company. So before Windows existed, we were doing trending in DOS.
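Conceptually, the new event model is a small tree per top-level run, and several of those trees can be open at once. A minimal sketch (invented names, not Canary’s schema):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Event:
    name: str                       # e.g. "Order Run", "Weigh", "Mix", "Fill", "Package"
    start: datetime
    end: Optional[datetime] = None  # still None while the stage is in progress
    children: list["Event"] = field(default_factory=list)
    properties: dict[str, object] = field(default_factory=dict)

# Two order runs in flight at the same time ("parallel events"),
# each capturing its own sub-steps as child events.
order_a = Event("Order Run A", datetime(2025, 9, 1, 6, 0), children=[
    Event("Weigh", datetime(2025, 9, 1, 6, 0), datetime(2025, 9, 1, 6, 20)),
    Event("Mix",   datetime(2025, 9, 1, 6, 20)),            # stage still running
])
order_b = Event("Order Run B", datetime(2025, 9, 1, 6, 45), children=[
    Event("Weigh", datetime(2025, 9, 1, 6, 45)),
])

active = [e for e in (order_a, order_b) if e.end is None]
print(len(active), "order runs currently in process")
```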
25:25
Speaker 2
So those of you that are old enough know what DOS is; I do, unfortunately. But what happened is we were a trending company and we were trying to acquire data from, you know, different systems, and people complained that our trending was too slow. And we’re like, well, they can’t give us the data fast enough. And so hence the Historian was born; we’re like, well, if we create our own database, then it’s all on us. So trending came first, and then along came the Historian. So we’ve enhanced the Trend Graph control, which is an extremely popular control that people use. You can put it into edit mode now, and you can browse and choose the event type that you want to target.
26:08
Speaker 2
And of course, with an event, you’re dealing with assets and an asset template, and so you can choose the tags that you want to visualise on the screen. By default, it goes back and grabs, I don’t know, the last 10 batches or something like that. That’s all configurable. There are all kinds of new search capabilities around which batches are going to be on the screen. And then we have highlighted here what we call pinned events. You guys who are actually in the industry probably call this a golden batch. So if you have an optimal process that you want to see always on your display, you can kind of, you know, pick those as your favourite or your pinned event. And then anytime that you come in and use the graph control, those pinned events are going to be shown on the screen.
26:53
Speaker 2
Those of you who are longtime Axiom users, you’ll notice that the time bar at the bottom is drastically different. It’s no longer based on date and time; it’s entirely relative. So out of all the batches that you’ve filtered on and brought into this screen, the longest duration is going to control the time window, and then all the other batches are just going to fill in as much as they need. And this can be running in live mode, so your current batch or batches can all be trending and coming across the screen all at once as well. Excuse me. All right, so event properties. We’ve had calc properties for quite some time that you could build out when you define these events, and we’ve kind of expanded on that even more.
27:44
Speaker 2
So, you know, lots of times if you go into an alarm condition or an event condition, people want to see what my maximum pressure was over the event duration, or what my average temperature was over that time. Or maybe, you know, running totals: how much did we produce, what length of roll did we produce? And so those are still very applicable. Those event properties can be routed to your email templates, so when an event starts or stops, you can route any of those variables into the template in the email. And then, as I just talked about with the trend graph, those can actually be used inside some of the filtering that you can do inside Axiom. Now, if you’re familiar with our calc server, you would know we had a list of maybe 45 functions that you could use within your expressions.
28:41
Speaker 2
As we built out and supported this multi-level eventing, we needed to add even more functions. So you can now do things like active event count, completed event count over a time period, asset instance counts over a time period, things like that. In this multi-level event concept, you can also define a variable at your top-level event, and you can actually pass that variable value down through all the child events, too. So maybe you have an order number that you’re capturing at the highest level, but that order number, for traceability, needs to be on all those child events too. So we have the concept of passing from parent and passing from siblings. All right. Besides the UI in calcs that people beat us up on, they also beat us up on backfilling.
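As a standalone illustration of those two ideas, an event property computed over the event window and a value passed from parent to children (plain Python again, not the calc server’s real functions):

```python
from datetime import datetime, timedelta

# Raw pressure samples over one batch's time window.
start, end = datetime(2025, 9, 1, 6, 0), datetime(2025, 9, 1, 7, 0)
pressure = [(start + timedelta(minutes=5 * i), 2.0 + 0.1 * i) for i in range(12)]

# Event property: max pressure over the event duration.
max_pressure = max(v for ts, v in pressure if start <= ts <= end)

# "Pass from parent": the order number captured on the top-level event
# is copied onto every child event for traceability.
parent = {"name": "Order Run A", "properties": {"order_number": "WO-1042"}}
children = [{"name": "Weigh", "properties": {}}, {"name": "Mix", "properties": {}}]
for child in children:
    child["properties"]["order_number"] = parent["properties"]["order_number"]

print("max pressure:", round(max_pressure, 2))
print([c["properties"]["order_number"] for c in children])
```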
29:33
Speaker 2
So backfilling: potentially the calc server just kept running and computing these values, but maybe a site was down and data wasn’t actually being transmitted to Canary at the time. That concept of redoing or recomputing a time period is what we call backfilling. If I had an asset-based calculation that had maybe 1,000 instances of something and I needed to backfill, unfortunately, we had to backfill all thousand instances. That wasn’t very performant or very efficient. So we’ve added in the granularity that now, when you backfill, you can actually pick specific instances. If only one site is down, only backfill that one site; I don’t have to touch the other 999. And we deal with customers in all kinds of industries.
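The gain is simply recomputing only what needs it. A toy sketch of the idea, not Canary’s backfill implementation:

```python
# 1,000 asset instances, but only one site dropped its connection overnight.
instances = [f"Site{sid:02d}/Compressor{i:03d}" for sid in range(10) for i in range(100)]

def recompute(instance: str, start: str, end: str) -> None:
    # Placeholder for re-running the asset calculation over the missed window.
    pass

# Before: backfill touched everything.
# for inst in instances: recompute(inst, "2025-09-01T00:00", "2025-09-01T06:00")

# Now: pick just the affected instances.
affected = [inst for inst in instances if inst.startswith("Site07/")]
for inst in affected:
    recompute(inst, "2025-09-01T00:00", "2025-09-01T06:00")

print(f"backfilled {len(affected)} of {len(instances)} instances")
```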
30:31
Speaker 2
And so we get asked to add in very specific computations that only apply to a subset of our customers. We try not to build things into our software that only apply to a single vertical. So a very common one is, you know, kind of boiler performance, if you’re familiar with that. I’m not a physicist, I don’t know what these terms are, I have no clue what they do. But what we decided to do is give our calc server what we call a plug-in interface, so you can actually develop your own .NET assembly, implement our interfaces, build in whatever crazy customised calculations you need to do, and drop those onto the server. Suddenly, our calc system wakes up, and you will see your function names inside our available functions. So the first one that we did was what we call Steam Calcs.
31:27
Speaker 2
We needed this to check some boxes on RFPs for customers who were looking at us. And so we’re anxious to see where this goes and what type of custom cool calculations people are really going to come up with.
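The real extension point is a .NET assembly that implements Canary’s interfaces; as a loose analogue of the idea only, here is what registering a custom function by name could look like in Python. The steam-enthalpy arithmetic below is a fake placeholder, not a real steam-table calculation.

```python
from typing import Callable

# A registry standing in for "drop an assembly on the server and the calc
# system wakes up and lists your function names".
CUSTOM_FUNCTIONS: dict[str, Callable[..., float]] = {}

def calc_function(name: str):
    def register(fn: Callable[..., float]) -> Callable[..., float]:
        CUSTOM_FUNCTIONS[name] = fn
        return fn
    return register

@calc_function("SteamEnthalpy")
def steam_enthalpy(pressure_bar: float, temperature_c: float) -> float:
    # Placeholder arithmetic only -- a real implementation would use steam tables.
    return 2500.0 + 1.9 * temperature_c - 2.0 * pressure_bar

# The expression engine could then resolve "SteamEnthalpy(P, T)" by name.
print(sorted(CUSTOM_FUNCTIONS))
print(CUSTOM_FUNCTIONS["SteamEnthalpy"](10.0, 180.0))
```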
31:44
Speaker 1
And I will say, the way that a lot of those feature requests come in: we have the Canary Help Center where users can go in and submit a feature request; everything is reviewed and we take it into consideration. Much like this morning, when we talked about some of the custom collectors and integrations we have with other systems: when the use case warrants it, we certainly add that to the roadmap. We may be a little different from some companies in the sense that we don’t have a 10-year roadmap, because who knows what’s going to be coming in the future. But we do take those feature requests into consideration. That Steam Calc is a great example.
32:18
Speaker 2
All right. So, outside of Views and calculations, we actually did some other work. As Kyle talked about earlier, we are seeing larger and larger systems, and we’re seeing more distributed systems. Sometimes troubleshooting and root cause analysis can be really tricky because you’re bouncing from machine to machine, trying to look for handoffs of data and interaction between all of our services. And so for these larger deployments, we’ve come up with a way that we can actually store our message log in a single SQL Server database, assuming that all those machines have access to it. Right now, we have a message log per machine, and so that gets a little tricky when you have to start bouncing around. So now you can have a single SQL database, with all the messages from all the machines routing to it.
33:13
Speaker 2
On top of our Views service, we have what we call our publisher service. So our Views service is our API layer as well; if you do make a request for data, as we talked about, that is going to the Views service. Most requests for data we consider batch requests, you know, made on demand. The publishing service allows us to stream data out of Canary in real time. And so we’ve had that service in place for a while. We’ve been able to stream MQTT, whether it’s JSON or Sparkplug B, and we could also stream JSON to a WebSocket connection. And now we’ve added Kafka as another destination. We’ve also expanded some of our API functions. These actually came about because of some of the work that we had to do in calculations, especially around backfilling.
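On the consuming side of that real-time stream, picking it up from Kafka is ordinary client code. A minimal sketch using the kafka-python package; the broker address, topic name, and JSON payload shape are all placeholders and assumptions, not Canary’s documented format.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "canary-tag-data",                      # placeholder topic name
    bootstrap_servers="broker.example.com:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    sample = message.value                  # assumed shape: {"tag": ..., "t": ..., "v": ..., "q": ...}
    print(sample.get("tag"), sample.get("t"), sample.get("v"))
```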
34:02
Speaker 2
So, historically, our Write API was really just writing data, and of course the Read API is reading that back out. We didn’t really have functions built into our APIs. So delete range and obsolete tag are now some of the new functions that we’ve exposed in our API. And I expect over time that we’re going to expose more and more of those, so that you can do configuration of Canary outside of our Administrator at some point. History validation may not seem like a big thing to most people. It’s something that actually runs in the background; some people don’t even realise it’s there or what it’s doing. But since we are file-based, there are instances where power interruptions, things like that, can cause validation errors within the Historian files.
34:56
Speaker 2
For some of our large customers that have hundreds of data sets, this process, which kicks off around midnight each night, was taking entirely too long because of how we were doing it. So it’s something small and maybe inconsequential to some people, but it was something that we did to really increase efficiency on the back end. And of course, we’re seeing larger and larger data sets, data scaling at a tremendous rate. So we’re always doing optimisations in the Historian and in our transport from our collectors to the Historian, which is what we call Store and Forward. Again, we’re seeing scale that I never even thought of 10 years ago. It’s crazy how fast the systems are growing. We had a customer just recently who had a SCADA system. Again, it’s a closed SCADA system.
35:54
Speaker 2
We couldn’t get the data from them, but they realised they could generate 10,000 CSV files every minute, and that’s what they were dumping onto our file system. And so just trying to accommodate things like that, we’ve done all kinds of work in Store and Forward. Store and Forward has a bit of intelligence built into it now, rather than just being a straight pass-through. It recognises when maybe throughput isn’t as optimal as we expect. And it can actually dynamically start changing packet sizes and things like that, and determine whether it needs to be in buffering mode or pass-through mode. So we build a lot of new features, you know, under the covers that a user’s probably not going to recognise. But we’re doing things to ensure, you know, on-time data delivery.
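A cartoon of that decision logic, purely conceptual and not Canary’s code: watch how far the receiver is behind, then switch between pass-through and buffering while adjusting packet size.

```python
def next_mode(current_packet: int, sent: int, acknowledged: int,
              min_packet: int = 100, max_packet: int = 10_000) -> tuple[str, int]:
    """Decide buffering vs pass-through and grow/shrink the packet size."""
    backlog_ratio = (sent - acknowledged) / max(sent, 1)
    if backlog_ratio > 0.25:
        # Receiver is falling behind: buffer locally and send smaller packets.
        return "buffering", max(min_packet, current_packet // 2)
    # Throughput looks healthy: pass straight through and try larger packets.
    return "pass-through", min(max_packet, current_packet * 2)

print(next_mode(current_packet=1_000, sent=10_000, acknowledged=9_900))  # healthy link
print(next_mode(current_packet=1_000, sent=10_000, acknowledged=6_000))  # congested link
```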
36:47
Speaker 2
We talked a little bit about Axiom with the enhancements for the trend graph. Another ask that we’ve had for many years, for those of you who are familiar with Axiom: of course a user can save charts and applications in their private folder, but then we had Public, and we had Read Only, and Public was the wild west. You could do whatever you wanted in there, and if it’s in there, anyone could see it. People really wanted more granular control over permissions within that folder. So within our Identity tile now, you can actually go in and start doing folder-level security inside Axiom. You can go all the way down to the chart or the application level, just like you can for tag security; it’s going to follow the same model.
37:35
Speaker 2
Those of you who have used limits inside Axiom know all of our widgets that have a limit concept. I personally never liked how we did that, so we overhauled it. Again, a quality-of-life thing; we’re hoping it’s a better experience for you as you define those limits on your controls. There’s also continued work around trying to scale the possibilities of report generation. During the fireside, one of the metrics on large customers: we actually have a customer in Canada that every morning, about 7am, kicks off 200 automated reports all at once. That’s quite a drain on the system; it’s a lot to ask all at once. So we’ve done work to try to make that more reliable, allowing more jobs to run concurrently, things like that. We also did some authentication work inside Axiom.
38:36
Speaker 2
We call it reverse proxying. Previously, if you had Axiom off on another machine, you had to open a port so it could come back to our identity service. We actually do all that under the covers inside the Axiom service now, so it eliminated the need for an additional port to be open for us. On diagnostic tags, I’ll kind of talk about this in the roadmap, too: we’re expanding the tags that we’re logging for each service. Right now they each have, obviously, memory, CPU, kind of high-level things. We’re really reevaluating each service and expanding out the diagnostic tags so that we can better troubleshoot and help our partners and end users.
39:20
Speaker 1
Before Ken jumps into the roadmap, I just wanted to touch briefly on, you know, it’s not just the highly technical stuff and under-the-hood changes. With all these changes, as you can imagine, there was a need to overhaul our training components. So we have the Canary Academy, which is our equivalent of Inductive University, and there is an entire series of new videos and features that have gone into it. For our current customers in the room, you may have interacted with Steve Mason in the past. He was leading up our support team; he’s now focused solely on training and doing some of these new modules and things. So for those current Canary users who have maybe gone through the Canary Academy in the past, I’d encourage you to check it out. There’s a lot of new content in there.
40:04
Speaker 2
Okay, so what’s coming next? Those of you who have attended many years here, you’ve heard us talk about Linux, or Linux as you guys like to say it. And so while the whole system, in theory, actually can run there now, we don’t have a good means to administer it yet. So we’ve taken a more modular approach. We are releasing the MQTT and OPC UA collectors, and of course those need the Store and Forward service. Those are going to be available in a short amount of time; engineering’s finally kind of signed off on them, and we’re ready to dive into a brand new OS for us. Obviously, we’re working on the Ignition 8.3 module. They blew us up, and that’s okay. We’re scrambling to, you know, make the appropriate updates and get that thing built as quickly as we can.
41:00
Speaker 2
No, I don’t have an ETA; as fast as possible is our answer. So we understand that. You know, those that were here last year, I talked a lot about our version 24 and how we re-architected a ton of things, and we blew up other vendors because we changed interfaces as well. So I guess it was kind of karma, or payback that we were due. Yeah, so I already talked about diagnostic tags. Views and asset template extension: you know, in my quick little demo, I showed you all the tags that were defined, or that we discovered, for a rack. We understand that not all parts of a template are necessarily going to come from a historian tag, but that’s what our requirements are today.
41:48
Speaker 2
And so we are going to have extensibility in there where you can add your own attributes. Those attributes could be just a static value, or potentially, they are going to be looked up from a relational database. So we’re already doing some work inside our identity service to kind of have named relational connections, and then those named connections can be used throughout the system. And one of those places will be in asset templating. And I think I just quickly mentioned that we did release 25.4. I think it was Monday or Tuesday of this week. And so that was a big lift for our engineering department. So now we’re on to bigger and better things.