26:20 min
Introduction
Markus Oosthuizen will take you on an interactive journey through the possibilities of Ignition, showing live, practical applications that spark new ideas and unlock fresh opportunities for your operations. This isn’t just a demo. It’s a hands-on exploration designed to inspire, challenge, and equip you with the tools to transform the way you work.
SPEAKERS: Markus Oosthuizen
TRANSCRIPT
00:10
Speaker 1
All right, so we have Markus up next. Markus is our Application Engineer. The brief to Markus was actually quite simple: we’ve got all of these amazing new features in 8.3, and I think a lot of people are excited. A lot of that excitement is about the new designer capability; that’s kind of where it started, but there’s so much more to it. And the brief was really simple: do something different. So, Markus, wow us. Thank you. Hi, everyone. Can you hear me? Yes. Cool. Okay, so today’s demo is going to be pretty unique, I hope. Today we’ll be doing a few practical innovations and implementations using Ignition 8.3. Some of it is, I hope, new to you.
01:08
Speaker 1
Others are basically just a quick way to show you how we can use the new tools in 8.3 to do something pretty cool. So let’s start. First up, we’ve got a few basic ones: we’ll use event streams to log a ticket for us, and then we’ll use the new forms component to subscribe to an alarm notification. After that, we’ll discuss computer vision quickly. We’ll go through what computer vision is, the challenges we face in integrating it with our SCADA systems, and then the possible edge architecture, which is the one we’ll be using today as well. Then we’ll see how we can integrate computer vision with Ignition. We’ll look at data pipelines using event streams and form auto-filling, and then a few fun examples. So first up, let me just make sure the PLC is running before I do anything.
02:16
Speaker 1
So first up, what we’ve got is a water pump station. At the moment, it’s just emptying. You’ll see we’ve got a maintenance ticket system. This is from the Ignition Exchange; it’s a free resource you can download and implement. I think it’s called Ticket Now or Track Now, something like that. I can’t remember, but you can download it. It’s a quick way to log tickets. This can be anything in your plant or your client’s plant. It can be SAP, wherever you do your work orders from, MES, it can be anything. So what we’ll be doing is failing this valve here. When we fail this valve, I’m going to show you our event stream. What we expect to happen is that the valve failure itself will go into this event stream.
03:14
Speaker 1
The moment the valve has failed more than three times, we expect the event stream to log and create the ticket for us, telling us: listen, this valve has failed more than three times in the past 24 hours, and you should go investigate. Okay, so let’s hope it does that. So let’s fail our valve. There the valve fails. We can now go acknowledge it. When we go to our event stream, you can see the event stream triggered once it was allowed through the filter. All we do here is check if the alarm is active. In the transform, we don’t do anything fancy.
04:03
Speaker 1
What I do is just add some metadata to it, tell it which shift we’re currently in, some date timestamps. Then in our encoders, we write it to a database, and in the script is where we check how many times it’s failed. So we can go back and fail it another time. It comes up; this will be two. And then we can fail it a third time. So this is three. Okay, so at the moment, you can see it came through three times. If we go back, there are still no tickets. All right, so I fail it one last time. This is now four. We can acknowledge it. As we can see in the event stream, it has now come through four times. And as you can see, it created the ticket.
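As a rough sketch of the stages described here, something like the following could sit behind that stream. The stage signatures, the valve_failures table, and the createMaintenanceTicket helper are all illustrative assumptions, not the exact Ignition 8.3 event stream API:

```python
# Illustrative sketch only: the stage signatures and the shape of "event"
# are assumptions, not the exact Ignition 8.3 event stream scripting API.

def filter(event):
    # Only pass active alarm events through to the rest of the stream.
    return event["eventType"] == "active"

def transform(event):
    # Add metadata: current shift and a timestamp.
    event["shift"] = getCurrentShift()   # assumed site-specific helper
    event["loggedAt"] = system.date.now()
    return event

def handler(event):
    # The encoder has already written the event to the database; count how
    # many times this valve failed in the last 24 hours (SQL dialect will
    # vary) and log a ticket once it has failed more than three times.
    failures = system.db.runScalarPrepQuery(
        "SELECT COUNT(*) FROM valve_failures "
        "WHERE source = ? AND t_stamp > NOW() - INTERVAL 24 HOUR",
        [event["source"]])
    if failures > 3:
        createMaintenanceTicket(event["source"], failures)  # assumed helper
```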
05:13
Speaker 1
You can see it was created by the system. From here, you can allocate it to any engineer. The engineer can then go investigate the valve, see which shift it failed on, and update the ticket for you. It can then be moved to in progress, validation, and so on. And that is just a quick implementation to show you event streams. Next, we’ll be looking at the forms component. So, what I did was create a form. The form is actually very cool as soon as you start playing with it. I don’t know who has and who hasn’t, but it’s got an auto-fill property, and your browser will determine whether it can auto-fill or not. I did activate mine, so mine is quite easy. I selected email.
06:04
Speaker 1
So, how this will work is you basically subscribe to an event. What I’m going to do is, on this valve, subscribe to the same thing, an alarm event. I want to know when the state of the alarm changes, meaning when the alarm goes off, I want to get an email, and I only want to do that for the next, what’s the date, let’s say 24 hours. Okay. This may be useful when technicians have just fixed the valve; they just want to know, for the next 24 hours, is the valve actually still working? Just to have some visibility. You can also subscribe to a level, let’s say for a weekend. You’ve got maybe water restrictions, and you want to subscribe to your reservoir levels for the weekend to make sure the levels are where they should be.
06:55
Speaker 1
You can make it hourly or daily. Okay, let me just open our Papercut app. So what we’ll do is subscribe, and you’ll see success. Let me just show you. Here’s our email telling us: Hi Markus, your alert notification for valve CDA is scheduled to alert on state change intervals. So once again, we can now fail the valve. The valve fails. The new email comes through telling you this valve has failed: the previous state is false, the current state is true. Now you know your valve has failed. This can be Telegram, this can be WhatsApp. It doesn’t have to be email; it can be SMSes, it can be whatever you want it to be. The nice thing about this, especially for management, is that they like to have visibility. Engineers don’t like it when management comes to them for visibility.
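A minimal sketch of how a subscription like this could be served on the gateway, assuming a hypothetical alarm_subscriptions table and SMTP profile name; system.db.runPrepQuery and system.net.sendEmail are real Ignition calls, the rest is illustrative:

```python
# Hedged sketch of serving the subscription in the demo: when the alarm
# changes state, email everyone with an unexpired subscription. The
# alarm_subscriptions table and the SMTP profile name are assumptions.

def notifySubscribers(source, prevState, currState):
    subs = system.db.runPrepQuery(
        "SELECT email FROM alarm_subscriptions "
        "WHERE source = ? AND expires_at > CURRENT_TIMESTAMP",
        [source])
    body = ("Your alert for %s triggered on state change. "
            "Previous state: %s. Current state: %s."
            % (source, prevState, currState))
    for row in subs:
        system.net.sendEmail(
            smtpProfile="default",            # assumed SMTP profile
            fromAddr="scada@example.com",     # assumed sender address
            subject="Alert notification: %s" % source,
            body=body,
            to=[row["email"]])
```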
07:57
Speaker 1
It’s nice for them to have this when they’re not on the plant; they can also get notifications. And this is just the basic implementation. Okay, so let’s move on now. Computer vision. Who here knows what computer vision is? I’m sure some do. Okay, that’s good at least. So just quickly: computer vision is a field of artificial intelligence that trains computers to interpret and understand visual information from images and videos, similar to human sight. It usually uses machine learning, more often deep learning, to process, analyse and extract meaningful data to perform tasks like object recognition, pattern detection, and scene understanding. Examples of this are our phones, self-driving cars, logging into your phone, quality control and defect detection in manufacturing; the list goes on. The challenges we face with computer vision at the moment: number one, it’s expensive.
09:01
Speaker 1
You need graphical processing units, which are extremely expensive. It’s resource-intensive. You either subscribe to LLM models, which can also be a costly exercise, or, like I said, you run power-hungry devices, which can be a risk to your production if they’re on the same server. Usually, the people implementing computer vision models like to keep the IP closed, which is more than fair; it’s just difficult to integrate with those models and the data they expose. If you want to start pulling more data or graphs from whatever the models are returning, you usually need them to integrate it for you. Then there are scalability barriers; this is probably the big one. Using a public model is not something I would advise if you want it accurate; it’s a good way to get started.
09:59
Speaker 1
But you do, however, probably need to train models for your specific plant if it’s going to run in a production environment. Right. If you look at PPE detection models, you would want to train them on your plant’s PPE. If you’ve got a food and beverage plant, you can’t be using a construction-site PPE detection model. Like I said, if this is something you guys are interested in, please come talk to me. There are a lot of people out there who do this, who you can work with to get this done. Okay. And then reliability: if you’re subscribing to an API in the cloud and there are network failures, there could be some downtime. So this is our architecture for today.
10:50
Speaker 1
We’ve got our Ignition server, a basic PLC, and then this is what we’ll be running. As you can see, this is a Raspberry Pi. This was my home server. It is not anymore. I’ve got an API service running on it, which contains all the functions required for our models, such as processImage. I give processImage arguments like which model to run, say object detection, and the image data, and then it returns the result for me. I send the data through an HTTP request, it processes it, and it sends the data back to me. Okay, so on this: if you are implementing this in a production environment, having Ignition Edge is probably better, because the built-in Store & Forward capability back to your Ignition server means you don’t need to handle connection drops yourself. Okay, data pipelines.
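Before that, a rough sketch of what that processImage call could look like from an Ignition script. The Pi’s address, endpoint path, and payload fields are assumptions for illustration; system.net.httpClient and java.util.Base64 are standard:

```python
# Sketch of calling the edge inference service from Ignition. Endpoint,
# address, and payload shape are assumptions, not a published interface.
from java.util import Base64

def processImage(imageBytes, task="object_detection"):
    client = system.net.httpClient()
    payload = {
        "task": task,  # which model to run on the edge device
        "image": Base64.getEncoder().encodeToString(imageBytes),
    }
    # POST to the Raspberry Pi's API service (address is hypothetical).
    response = client.post("http://192.168.1.50:8000/processImage",
                           json=payload)
    if response.statusCode != 200:
        raise Exception("Edge service returned %d" % response.statusCode)
    return response.json  # detections, confidences, annotated image, etc.
```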
11:52
Speaker 1
I think most of the people here are SIs. You should all know data pipelines, even if you don’t know the actual term. It’s basically just the flow that your software, any software, follows from input to output. What you’re looking at here is a data pipeline for a computer vision model that does license plate recognition. Now, if you think about license plate recognition: first you’ve got your camera input. It will decode the frame and look at it: is it a vehicle or not? There’s no point in doing license plate detection if there’s no vehicle. Then it will check if it’s a new vehicle, and it does a license plate detection check. If the quality is good, it does your license plate recognition.
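A minimal sketch of that pipeline as plain functions, purely to make the flow concrete; every helper name here is illustrative:

```python
# Minimal sketch of the license plate pipeline; each helper stands in for
# an edge model call or an event stream stage and is purely illustrative.
def handleFrame(frame):
    for vehicle in detectVehicles(frame):        # no vehicle, no plate work
        if not isNewVehicle(vehicle):            # skip vehicles already seen
            continue
        plate = detectPlate(vehicle)             # crop the license plate
        if plate is None or plate["quality"] < 0.5:
            continue                             # quality gate before OCR
        publish(vehicle, recognisePlate(plate))  # OCR result into Ignition
```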
12:40
Speaker 1
So what we’ll be doing now, hopefully, if this works, is replacing some of these functions inside the pipeline with event streams in Ignition. The ones we’ll be replacing are everything in red on this side; everything that’s not in red is basically your AI model. The reason we do this is that we now run our AI models at the edge, and we do all our logical processing in Ignition. The nice thing about this is that we can easily change the way we choose to integrate with the model. We can add more edge devices with different models running on them. And also, if it does go down, our Ignition server is safe and not affected by it at all. So let’s quickly just check. Ah, the connection dropped.
13:39
Speaker 1
I’m sure Jaco explained to everyone that you very nearly would not have seen this AI presentation at all today. The Raspberry Pi would not boot up at all. Okay. But it is now. Okay, so let’s look at how we do it. As you can see, this is taking a while; the reason for that is, like I said, it’s my home server. So what we’ve got is, well, from what you can see, six cars; the AI model will pick up a few more. Let me show you our event streams for today on this one. It might be a bit small. Okay, so we’ve got a plate detection model, which is enabled. We’ve got a plate recognition model, which we’ll disable. We’ve got a super-resolution model, which we’ll disable. And we’ve got a vehicle detection model.
14:44
Speaker 1
Sorry, this one we’ll enable, and plate detection we’ll disable. Okay, so we’ve got four event streams, technically pipelines. So what we’ll do is run this model now. These three event streams are disabled; only vehicle detection is enabled, so we expect it to only return vehicle detections for us. So when we run, there are the vehicle detections. As you can see at the bottom, this is JSON data, so it’s easy to work with. It returns the visual format as well as the actual JSON data of what it detected, including how confident it is that it’s a vehicle. As you can see, this middle picture here didn’t pick up the car. That’s because the model I’m using is a public model; it’s not meant for production use at all. So the next model we’ll activate is the plate detection model.
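For a sense of what that returned JSON could look like and how you might filter on confidence; the field names here are assumptions, not the demo’s actual schema:

```python
# Illustrative shape of the JSON a vehicle detection pass might return,
# plus a confidence filter; all field names are assumptions.
result = {
    "detections": [
        {"label": "vehicle", "confidence": 0.91, "box": [34, 60, 410, 300]},
        {"label": "vehicle", "confidence": 0.47, "box": [430, 75, 790, 310]},
    ]
}
# Keep only detections the model is reasonably sure about.
confident = [d for d in result["detections"] if d["confidence"] >= 0.8]
```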
15:44
Speaker 1
After this, what we expect it to do is crop the license plate and return it to us here. Because the middle picture isn’t picking up the vehicle, it won’t pick up the license plate either; it didn’t pick up a vehicle at all. Okay, so clear, and then, sorry, the network is extremely slow, I’ll just save. So now when we run, it picks up the license plates. As you can see, there they are once again. It had actually picked up a car back there and there, but it couldn’t pick up the license plates. So we can clear this again. Now what we’ll do is activate plate recognition. Plate recognition is basically an OCR model, optical character recognition. There are a lot of things you can use that for.
16:45
Speaker 1
So what we’ll do is just make sure it’s saved, and we’ll run it again. It runs. So there: that one is pretty accurate. That one is okay. These two are not accurate at all. So what we can do now is activate another event stream called super resolution. What it does is, if it sees that no license plate recognition text was returned, it sends the image through the super resolution stream. Basically, it enhances the image for us, then sends it back to the plate recognition stream. So save, run it again. The network’s slow enough that you might actually see the change. Okay, so there. This one is okay. This one is perfect now, and the rest are still the same, except for that one, which is also a bit different. So yeah, this is a very unique way of using event streams.
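A sketch of that conditional routing as a single function, with illustrative helper names:

```python
# Sketch of the routing just described: if OCR returns no text, enhance
# the crop and retry. Helper names are illustrative stand-ins for the
# event streams / edge model calls.
def recogniseWithFallback(plateCrop):
    text = runPlateRecognition(plateCrop)         # first OCR pass
    if not text:
        enhanced = runSuperResolution(plateCrop)  # enhance the image
        text = runPlateRecognition(enhanced)      # second OCR pass
    return text
```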
17:51
Speaker 1
I’m not saying this is what you should use it for. Like Jaco said, the entire goal of today was to show you the different ways of using the different features. At the very least, I hope this inspires you or sparks an idea for anything you use at your site. So, quickly, okay, now everyone’s going to see my ID photo. This is basically what we’ll do here: it’s going to take my ID, crop my face, read the data on my ID, and fill in the form for me. Okay, so we’ll upload it, and there are the crops. Sometimes you have to click. So there, it filled in the entire form for me. I just hashed out my ID number for safety; that’s not going to help you at all. Anyway, okay, so now I’m uploading a different picture.
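A hedged sketch of pushing OCR results into a Perspective form like this one; the task name, response fields, and property paths are all assumptions about the demo app:

```python
# Sketch only: OCR response fields and component property paths are
# assumptions. processImage is the edge API wrapper sketched earlier.
ocr = processImage(idPhotoBytes, task="id_ocr")  # idPhotoBytes is assumed
form = self.getSibling("Form")
form.props.name = ocr.get("name", "")
form.props.surname = ocr.get("surname", "")
form.props.idNumber = "#" * 13                   # masked, as in the demo
```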
18:59
Speaker 1
What we can do here is also run face validation. Run the validation; it’s saying it’s a match. It then enables the submit button for you. This is cool if you’ve got a driver who scans their driver’s license: you want to make sure the person on the driver’s license is the same person driving the car. Another use could be security access to certain parts of a plant. People usually have company IDs, so you can scan the company ID, scan their face, and make sure it’s the same person getting access to certain restricted areas. Okay. Like I said, AI, especially computer vision, is not something I expect everyone to go do tomorrow. It’s just to know that the possibility is there. The way I did it is not even the only way. I’m not saying it’s the correct way.
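A sketch of that face-match gate; the task name, similarity field, threshold, and component names are all assumptions:

```python
# Sketch only: enable Submit when the captured photo matches the face on
# the ID. Response shape and the 0.8 threshold are assumptions.
match = processImage(capturedPhotoBytes, task="face_validation")
isMatch = match["similarity"] >= 0.8
self.getSibling("SubmitButton").props.enabled = isMatch
```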
19:50
Speaker 1
It’s just a way to do it, to show you that it’s possible to add it to any currently running Ignition server, even if it’s as a PoC. If you want to trial it, we’ll work with you; whatever we can do to make it work. So these are just a few different models that we tried, to see if we can basically run them inside Ignition. We’ve got an object detection model. Here’s a picture of a bus; we’re going to run it. I don’t know how well you guys can see that, but there are some lines around them. Then we can do PPE detection, which is probably a lot more applicable for everyone here. So when we run this, it picks up the PPE. You can then say, let’s say, masks are not really applicable here. So let’s run it again.
20:43
Speaker 1
We exclude the mask, and then it picks up everything else. You can also say, okay, I only want to see hard hats, and then it will only return the hard hats, whatever is important in that specific area of the plant. Now, what’s nice about this is that you can define and change these custom functions as you want. That’s the problem where people implement computer vision models as silos: if you want to change things like this, it’s extremely difficult to do it yourself. You’ll have to get them back. Even pulling data from any model is extremely difficult to do yourself; even if you want to do it in Ignition, you’ll probably have to get someone to come do it for you. Then I’m going to try the video demo. What it’s doing is streaming a video.
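A sketch of that class filtering; the response shape and class names are assumptions about the demo’s PPE model:

```python
# Sketch only: filter the PPE model's detections by class, as in the demo.
result = processImage(frameBytes, task="ppe_detection")
detections = result["detections"]
noMasks = [d for d in detections if d["label"] != "mask"]       # exclude masks
hardHats = [d for d in detections if d["label"] == "hard_hat"]  # only hard hats
```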
21:40
Speaker 1
The network, like I said, is really slow, so I’m going to try. Let’s see, maybe we’re lucky, maybe it’s quick. It’s not too bad. So here it just picks up a few people working and looks at the PPE. I’m just going to go through a few. You see, this model, yeah, this is just to show you how far away it can pick things up. Like I said, this is not a production model; if you want this, you’ll have to train it to get these results right. But it’s to show you that it’s possible. And that’s what I really hope to inspire today: even though we’ve never seen it, we’ve never done it, I think you guys are probably, well, you guys are the smartest, right?
22:36
Speaker 1
So I think you guys can definitely do similar things, especially in mining: detecting heavy vehicle machinery, people, walkways, where people walk, even just to get safety compliance numbers. It’s not a name-and-shame game, even though sometimes you want to know who the person is; it’s about getting an actual safety compliance number per plant: seeing where people are walking, whether people are really PPE compliant. It gives management actual visibility on whether they are really performing on safety as they would want. So this one, let’s see. Yeah, it picks up license plates, it returns your JSON data here, and you can process it and do whatever you want with it. And then the last one: we could try the live one, which is from my phone. I’m not sure I’m going to, but it would be, I guess, the best one for everyone.
23:48
Speaker 1
Okay, so let’s stop it. Yeah, it’s still very slow; I just want to check. So like I said, with my phone, it’s actually live streaming from my phone’s IP, which is what I’m typing in now. But it’s actually following a pretty long path to get there. So let’s see if it works. We’ll click Live Demo and just do this. I don’t know which model you guys want; let’s take object detection first. Yeah, no, it’s very slow. Wait, no, no, it’s not happening. Yeah, no, sorry, the live one is not happening for you guys. If it works on the other ones, we’ll send you a video. So yeah, these are just a few examples of how we implement AI. I hope you enjoyed it. I hope you’ve got questions. I really do hope so.
25:31
Speaker 1
Not difficult ones, but anything you have, I’m prepared to answer, even if it’s afterwards. If you guys want to sit down at any time, even if it’s for the basic implementations, I would love to. We are more than prepared to share all the code, logic, and everything with you guys. It’s not our IP; I don’t care. If you guys want to use it, please use it. Use it in your house. I’m going to use it for my cameras if the Pi makes it back home at some point. So yeah, that’s it. Thank you.