As an increasingly sophisticated technology, AI can help build safer communities. Crime prevention, safety and security aided by artificial intelligence (AI) are no longer far-fetched ideas.
AI can increase operational effectiveness through automation and augmentation, drawing on a collection of technologies that simulate human traits such as problem-solving, perception and planning. These technologies complement human expertise and produce quicker, better outcomes when used together.
While AI can spot patterns that are invisible to the naked eye, humans can use intuition and experience to contextualize data insights and decision-making. Training teaches AI to distinguish between “good” and “bad” behaviour, a process said to be remarkably similar to teaching a child.
“By collecting a set of examples that we label as ‘good’ and ‘bad’, both child and AI begin by seeing an example and making a random guess at the type of behaviour they are observing. When they make a mistake, they get feedback about the error and update their internal models so that they are less likely to make similar mistakes in future examples,” said Sohrob Kazerounian, senior data scientist at Vectra AI.
He adds, “What is critical, however, and indeed makes the task of training somewhat of an art in both child and AI, is the fact that we need the examples used in training to be representative of the types of examples they are likely to see in real-world scenarios. If we carelessly select our training examples, we may end up sending a child and AI into the world with biased and potentially catastrophic models of the underlying behaviours they were intended to learn to identify.”
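The feedback loop Kazerounian describes (make a guess, receive feedback on the error, update the internal model) can be sketched as a toy perceptron-style classifier. This is purely illustrative; the feature names, data and thresholds below are invented for the sketch and do not come from Vectra AI or any real security product.

```python
import random

random.seed(0)  # make the toy training run repeatable

def train(examples, epochs=20, lr=0.1):
    """examples: list of (features, label), where label 0 = 'good', 1 = 'bad'."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            # Guess: a simple linear threshold (perceptron-style).
            score = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if score > 0 else 0
            # Feedback: update the internal model only when the guess was wrong,
            # so similar mistakes become less likely on future examples.
            error = label - guess
            if error != 0:
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
    return weights, bias

# Toy labelled examples: [failed_logins, bytes_out] -> behaviour label.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1)]
w, b = train(list(data))
probe_score = sum(wi * xi for wi, xi in zip(w, [0.85, 0.9])) + b
print("flagged as bad:", probe_score > 0)
```

The quality of the result depends entirely on the examples, which is exactly Kazerounian's caveat: if the four training points above were unrepresentative, the learned boundary would be too.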
With machines trained in this way, AI is eventually expected to work on its own with zero intervention; the two fundamental trends in the market are ‘zero interfaces’ and ‘zero decisions’. Yet that autonomy has limits. “While AI systems can monitor things that are far beyond the scope of human capability, their ability to respond to those detections is far more limited than a human operator’s. AI systems are simply not yet intelligent enough to reason about the types of options they might have in responding to an ongoing threat, much less assess the full spectrum of technological and financial consequences of any such actions,” said Kazerounian.
AI has also given rise to autonomous systems that can tackle more difficult tasks in a broader variety of settings, thanks to the convergence of machine learning and robotics. Although sensors can provide data to systems, AI helps process and make sense of that data, as well as suggest specific actions. With this learning process, AI is gaining prominence in the safety and security industry across the globe.
“The real disadvantage of AI is its unpredictability,” said Dr Jassim Haji, President of the Artificial Intelligence Society. Even if we know a system’s terminal goals, this unpredictability amounts to our inability to precisely and reliably predict which particular actions an intelligent system will take to achieve its objectives.
“However, there are other disadvantages, such as ethical issues concerning algorithms biased during design, plus data privacy, especially concerning technologies such as facial recognition and the usage of personal data for detection and pattern analysis,” he elaborated.
AI has already been incorporated into several housing, border protection and homeland security applications in Singapore. AI-driven perception, processing and analysis are crucial for gathering, sorting and analyzing data to better inform human decision-making. But with increased implementation comes the need for proper governance to ensure these systems function as intended.
Singapore’s AI governance model is a ground-breaking endeavour in Asia for ethical AI and is gaining international acclaim. Dubai’s “Principles of Artificial Intelligence”, while more high-level than some other governance structures, set out guidelines for keeping AI-driven systems safe. They state that AI systems should not be able to harm, kill or mislead humans on their own, and that the safety and security of people, whether operators, end-users or other parties, will be of paramount concern in the design of any AI system.
With these governance principles in place, the security concerns of AI implementation in the safety and security industry are under the microscope. There are also other major issues regarding the implementation of AI for safety and security.
“Security solutions that use artificial intelligence have to be re-trained at regular intervals as they have to keep up with evolving security threats. Moreover, cybercriminals can use AI software too. They often disguise their attack or pollute the sample so that their attack appears to be benign,” said Aharsh MS, CMO at Accubits. We will be better at mitigating AI’s risks as we gain a better understanding of it. The future we are used to seeing in sci-fi movies is no longer a pipe dream.