While much has been made of how AI can bolster physical security, concerns remain over privacy and data safety. As the AI boom churns on, the world will most likely face a new era of tech regulations and policies enacted to safeguard the public from AI that could be misused by bad actors.
It’s also time for private companies to reckon with how best to put this ever-evolving technology to use. Security Sales & Integration recently reported on new guidelines from professional security solutions provider i-PRO CO., Ltd. on how companies can best use AI.
“Recognizing the profound impact of AI on society, i-PRO has always placed paramount importance on fostering an environment of responsible and ethical AI usage,” reports Security Sales & Integration of the i-PRO announcement.
The tech company released its Ethical Tech Principles for AI as a guiding light for itself, and as something of a blueprint for industry peers, as AI continues to be integrated into day-to-day practices.
Here’s a look at i-PRO’s recommendations:
Making society safer and more secure: New and ever more advanced AI tools should be developed, but they must be created with their impact on society, and on safety, in mind. After a new AI product is released, i-PRO writes that “we continue to evaluate their impact on our customers’ lifestyle, society, and the environment, and continue to reflect the results of such evaluations in our products and services.”
Enshrine human dignity: Throughout history, the tech innovations that flourished never lost sight of the humanity of those who used them. Think of safety standards for automobiles. For i-PRO, “AI is based on the premise that people are central, and that AI should be used, developed, and deployed, to expand human capabilities and promote the pursuit of happiness.”
Be transparent: Transparency is always central when rolling out new tech. The company writes that any new AI product has to “strive to eliminate discrimination and unfair influences.” This means aspects of one’s identity like gender, religion, or race should always be considered, respected, and protected. The company says it will “consider social justice” when using and developing AI tools, providing “appropriate information to our customers” about how that data will be used.
Privacy is key: AI tech used for physical security must prioritize privacy. That means having top-of-the-line security measures in place that comply with legal regulations and company policies and that keep an individual’s personal data secure.
Train the team: Everyone must be trained, from provider to operator to customer. This applies across many fields but is especially important in the physical security sector. Unless the entire team, not just its managers, is well versed in how to operate AI-driven tech and how it integrates with existing tools, a company’s security can’t be ensured.
Cooperation and collaboration: Security involves many stakeholders, so whenever a new AI tool is implemented, those invested in a company’s success should have a say in how the tech is used. If stakeholders raise feedback or concerns about a new AI program or device, listen and consider their advice.
“As the physical security industry continues to embrace the promise of AI, we look forward to working together with our industry colleagues, partners, and customers to foster a culture of responsible AI development and usage,” Masato Nakao, CEO at i-PRO, said in the company announcement, as reported by Security Sales & Integration.
It’s a brave new world of AI technology, and physical security professionals must put safeguards in place to use it ethically and safely.