Peter Cavicchia



Physical Security Plans That Mix the Human with the Hi-Tech Are Most Effective

October 1, 2024 Pete Cavicchia

While much has been made about the rise of artificial intelligence (AI), the world isn't yet in an era where human expertise is being replaced by machines. It's quite the opposite: machine learning and modern technology are tools that augment human input.

In a new article for Security Magazine, Tomy Han writes about how the best physical security approaches involve human know-how combined with modern technology. This ranges from traditional, common sense approaches like effective tech training and security cameras to partnering human security experts with AI tech and advanced tools like drones.

Here’s an outline of some of the key contemporary technologies that today’s physical security experts and staff have at their disposal:

  • Guards with body cams: Citing a tool used by modern law enforcement, Han describes high-definition body cameras worn by security guards as important equipment that should be deployed on any business's campus, at retail stores, and in schools. Beyond leveraging data and footage captured by advanced video and audio recordings, there are other benefits. For example, the presence of a body camera visibly worn by a guard might deter potential bad actors. These cameras can also hold guards accountable, since that data can be reviewed if there are accusations of transgressions on the job.

  • AI surveillance systems: Surveillance systems powered by AI that can assess vast quantities of security data in real time are the name of the game. If there is a big sporting or political event, a security team will want this kind of security tech. “These systems apply image recognition and computer vision to video feeds from stationary cameras…With these capabilities, lean teams can focus on the highest risks and threats while leaving routine monitoring to technology,” Han writes.

  • Rise of the drones: It might have sounded like science fiction 20 years ago, but technologies like drones and robots are now common security features that can be harnessed by human staff. Drones can give security staff remote views of hard-to-see points of entry and access. Thermal imaging cameras embedded in many drones allow a security team to remotely assess potential threats.

  • Remote access control: Cloud-connected systems have made way for remote monitoring and access control of physical infrastructure systems that can be used by security staff who are stationed anywhere. This offers more convenience and improved vigilance, especially during off hours when a building or campus might be largely empty and left vulnerable.

These are just a few of the modern tech tools at a security staff’s disposal. All of them require human input and expertise. Human capabilities aren’t being replaced — instead, they’re being enhanced.
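The triage pattern Han describes, where routine monitoring stays with software and only the highest risks reach a lean human team, can be sketched in a few lines. This is a toy illustration, not any vendor's actual system; the labels, risk weights, and threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str        # e.g. "person_after_hours", "vehicle", "bag"
    confidence: float

# Hypothetical risk weights; a real system would tune these per site.
RISK_WEIGHTS = {"person_after_hours": 0.9, "vehicle": 0.3, "bag": 0.6}

def triage(detections, threshold=0.5):
    """Route only high-risk detections to the human team;
    everything else stays with automated monitoring."""
    escalate, automated = [], []
    for d in detections:
        risk = d.confidence * RISK_WEIGHTS.get(d.label, 0.1)
        (escalate if risk >= threshold else automated).append(d)
    return escalate, automated

feed = [
    Detection("cam-3", "person_after_hours", 0.95),
    Detection("cam-7", "vehicle", 0.80),
]
high, routine = triage(feed)   # only cam-3 is escalated to a human
```

The point of the sketch is the division of labor: the scoring runs continuously, while human expertise is reserved for whatever crosses the threshold.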

For more examples, check out Han’s full article.

Tags Tomy Han, AI

Company Reveals Guidance for How to Use AI Responsibly in Physical Security

July 19, 2024 Pete Cavicchia

While much has been made about how AI can bolster physical security, concerns remain over privacy and data safety. As the AI boom churns on, the world will most likely face a new era of tech regulations and policies enacted to safeguard the public from AI tech that could be misused by bad actors. 

It’s also the time for private companies to reckon with how they can best make use of this ever-evolving technology. Security Sales & Integration recently reported on new guidelines for how companies can best use AI from professional security solutions provider i-PRO CO., Ltd. 

“Recognizing the profound impact of AI on society, i-PRO has always placed paramount importance on fostering an environment of responsible and ethical AI usage,” reports Security Sales & Integration of the i-PRO announcement. 

The tech company released its Ethical Tech Principles for AI as a guiding light for itself but also something of a blueprint for industry peers as AI continues to be integrated seamlessly in day-to-day practices. 

Here’s a look at i-PRO’s recommendations: 

  • Making society safer and more secure: While it’s important that new and ever more advanced AI tools are developed, they have to be created with their impact on society — and safety — in mind. After a new AI product is released, i-PRO writes that “we continue to evaluate their impact on our customers’ lifestyle, society, and the environment, and continue to reflect the results of such evaluations in our products and services.”

  • Enshrine human dignity: Throughout history, the tech innovations that flourished never lost sight of the humanity of those who used them. Think of safety protocols for automobiles. For i-PRO, “AI is based on the premise that people are central, and that AI should be used, developed, and deployed, to expand human capabilities and promote the pursuit of happiness.”

  • Be transparent: Transparency is always central when rolling out new tech. The company writes that any new AI product has to “strive to eliminate discrimination and unfair influences.” This means aspects of one’s identity, like gender, religion, or race, should always be considered, respected, and protected. The company said it will “consider social justice” when using and developing AI tools, providing “appropriate information to our customers” about how this information will be used.

  • Privacy is key: AI tech being used for physical security must focus on privacy. This means having top-of-the-line security measures in place that comply with legal regulations and company policies, and that keep an individual's personal data secure.

  • Train the team: Everyone on a team must be trained, including the provider, the operator, and the customer. This applies to a number of fields but is especially important for individuals in the physical security sector. If the entire team — not just managers — isn’t well versed in how to operate AI-driven tech and how it integrates with existing tools, there is no way a company’s security can be ensured.

  • Cooperation and collaboration: Security involves many stakeholders, and any time a new AI tool is implemented, those invested in a company’s success should have a say in how the tech is used. If there is feedback or concern about a new AI program or device, listen to the stakeholders and consider their advice.

“As the physical security industry continues to embrace the promise of AI, we look forward to working together with our industry colleagues, partners, and customers to foster a culture of responsible AI development and usage,” Masato Nakao, CEO at i-PRO, said in the company announcement, as reported by Security Sales & Integration.

It’s a brave new world of AI technology, and for physical security professionals, safeguards must be put in place to use it ethically and safely. 

Tags AI, i-PRO

How the Physical Security Industry Will Harness the Current AI Wave

September 5, 2023 Pete Cavicchia

Debates and discussions around the role artificial intelligence (AI) technology will play in society at large have been everywhere — from Hollywood to the political arena. In day-to-day life, AI has played a big role in Google’s changing algorithm and through Apple products by way of virtual assistants like Siri.

Tags AI, Fagan Wasanni Technologies

A Look at 5 2022 Security Trends

February 7, 2022 Pete Cavicchia

We’ve ended the first month of the new year and are into February. With the year still young, it’s a good time to look ahead at some of the trends that might be shaping the next 11 months of security.

U.K.-based security firm Calipsa recently outlined some of the key trends in physical security for this year. Of course, in our 21st-century reality, the physical and digital are often combined: artificial intelligence (AI) and IoT fuel most of the technology we use. When it comes to securing your business or your family home, Brian Baker, Calipsa’s Chief Revenue Officer, outlines some of the big security trends you should keep in mind.

Here’s an overview of the five key trends from Baker’s post:

•   AI is the key

AI always seems to be at the top of the list. It’s embedded in almost all connected tech — from your phone to your building’s security cameras. Baker writes that AI in the security market was valued at $5.08 billion in 2020 and that the number could hit $14.18 billion by 2026. Baker says security-centric AI has shifted from mainly a “forensic analysis tool” applied after an incident occurs to a preventive tool used before your home or business faces a criminal breach. A big part of this is predictive data analytics, with machine learning using these predictive tools to make statistical decisions on data collected in real time.

•   The cloud is an indispensable tool

With remote work and a global economy extending security interests worldwide, Baker spotlights the fact that remote security is increasingly becoming a major focus. Remote video monitoring was already on the rise, but the realities of COVID-19 sped up that process. Baker cites his company’s 2021 Annual Report, which found that 75 percent of businesses surveyed reported using cloud-based video analytics — an increase of 8 percent from the year before. About 32 percent of respondents said they now use a remote security solution entirely tied to the cloud. This all helps a business scale up. Baker says many security options are heading in the direction of a “software as a service” (SaaS) model, enabling the flexibility of a subscription cloud service.

•   AI might help with staff shortages

We’ve all seen the headlines about mass staff shortages across all sectors of a global economy changed by the pandemic. Baker cites the 31 percent of respondents in the Annual Report who said staff shortages were the “greatest challenge” of the past year — a number up 20 percent from 2020. More than half — 55 percent — also said “staff shortage/sickness” stood as the “greatest operational challenge of the past year.” Baker says modern AI-powered security solutions can come to the rescue. Intelligent video analytics can serve as a 24/7 solution for monitoring security cameras in real time, filling in the gaps left by a diminished labor force.

•   Will the supply chain affect security in 2022?

The shift in global supply chain function has had an impact on businesses. A big threat has been cargo theft as supply chains grind to a halt. Baker says modern security technology is being applied to counter these criminal threats, but it comes with concerns of its own: the systems used to monitor and track vehicles and shipments are themselves prone to hacks. It’s a concern all of us in the security sector have to monitor as we progress through this year.

•   Physical and cybersecurity are becoming one

Baker points to something that has been evident for several years now: physical security and cybersecurity are becoming indelibly merged. Gone are the days when protecting cyber assets was a separate concern from shielding physical assets. Baker notes that some have been leery to embrace this change. The 2021 Video Surveillance Report from IFSEC Global shows that 64 percent of respondents said cybersecurity concerns were a “barrier to cloud video adoption” for their security needs. While that is understandable, it’s crucial that proper training, education, and vetting be put in place within firms to ensure that cybersecurity standards are upheld. In this market, one can’t afford not to augment physical security with cybersecurity. The future isn’t either physical or cyber; it’s both.

Tags Calipsa, AI, remote video monitoring

Why the European Parliament is Looking at Facial Recognition Ban

October 21, 2021 Pete Cavicchia

From social media tagging on popular platforms like Facebook to a way to unlock your iPhone, facial recognition technology is an increasingly sophisticated tool utilized by nearly every major tech company. It has been a part of law enforcement, building security, and personal computing.

Now, the European Parliament is looking to rein in its use in public spaces.

Earlier this month, the European governing body called on police to pull back on its use of artificial intelligence (AI) services that use facial recognition — a call to limit the application of this tech in mass public surveillance programs.

Members of the parliament voted 377 to 248 in favor of a non-binding resolution asking European Union lawmakers to ban automated facial recognition and put in place safeguards for how police forces use this AI, Engadget reports.

What these political leaders are saying is that everyday citizens should only be monitored by AI tools if they are suspected of an actual crime. They are suggesting this shouldn’t be an automatic protocol applied to all people in public spaces.

Engadget’s Kris Holt writes that the big concern centers on what is known as “algorithmic bias” in AI programs. The legislators are pointing to past research that suggests these kinds of facial recognition AI systems tend to misidentify minority ethnic groups, LGBTQ+ individuals, women, and senior citizens at higher rates than other people who are scanned by the same programs.

“Those subject to AI-powered systems must have recourse to remedy,” the resolution reads. It also calls for a ban on private databases of facial recognition information and on what is being called “predictive policing based on behavioral data.”

Holt adds that this latest resolution comes after recommendations earlier this summer from the European Data Protection Board and the European Data Protection Supervisor that said this tech should not use biometric data to classify people into “clusters based on ethnicity, gender, political or sexual orientation.”

Essentially, use of this AI could be mishandled in a discriminatory way, according to the Engadget writer.

What this news further underscores is that the use of ever more sophisticated AI technology will continue to be debated by policymakers and the public alike. As it becomes applied more and more in our daily lives, we will see calls for regulation, and discussions over how best it can be used.

Tags Facial Recognition, Engadget, AI

What New Tech to Fight Hackers Can Teach Us About Our Cybersecurity

June 27, 2021 Pete Cavicchia

It sounds like something out of a science fiction film. Scientists just developed new technology that entraps hackers in an artificial, cyber “shadow world.” The goal is to prevent these cybercriminals from carrying through with their objectives by luring them into what is being defined as “an attractive — but imaginary — world.”

The cybersecurity technology is called “Shadow Figment” and has been designed mainly to protect key physical targets like the electric grid, water systems, and pipelines, among other crucial aspects of our country’s infrastructure.

This groundbreaking tech was created by researchers at the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL), according to a recent announcement.

Shadow Figment: A new era of national cybersecurity defense

Shadow Figment uses AI to keep attackers engaged in an illusory online world once they enter a system like the electrical grid. The hackers are led to believe they are interacting directly with users in real time, with the AI responding realistically to commands.

“Our intention is to make interactions seem realistic, so that if someone is interacting with our decoy, we keep them involved, giving our defenders extra time to respond,” said Thomas Edgar, a PNNL cybersecurity researcher who led the team designing Shadow Figment, in the announcement.

The AI utilized in this program is very sophisticated. Hackers will be given false signals of success, thinking they have accurately infiltrated a system. This gives a cybersecurity defense team time to learn about the hack itself and better fortify the real system. Think of it like a digital smokescreen, throwing the hackers off their game.

PNNL’s research team says this “model-driven dynamic deception” made possible by advanced machine learning is a more credible AI defense than “static decoys” that have more traditionally been a part of cyber defense.
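As a rough illustration of how this kind of deception works (a toy sketch, not Shadow Figment's actual code), a decoy could answer an attacker's commands with plausible, fabricated state while logging everything for defenders. Every name and value below is hypothetical.

```python
import random

class DecoyPLC:
    """Toy decoy imitating an industrial controller: it reports fake
    sensor readings and acknowledges write commands without touching
    anything real, giving the attacker false signals of success."""

    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.registers = {"pump_speed": 1200, "valve_open": 1}
        self.log = []   # defenders study this while the attacker is stalled

    def handle(self, command, *args):
        self.log.append((command, args))
        if command == "read":
            base = self.registers.get(args[0], 0)
            # Jitter the value so repeated reads look like a live plant.
            return base + self.rng.randint(-5, 5)
        if command == "write":
            name, value = args
            self.registers[name] = value   # updates fake state only
            return "OK"                    # false signal of success
        return "ERROR: unknown command"

decoy = DecoyPLC()
decoy.handle("write", "pump_speed", 0)   # attacker "shuts down" the pump
reading = decoy.handle("read", "pump_speed")
```

The attacker sees an acknowledged write and a believable reading; the defenders see a timestamped log of every move, which is the extra response time Edgar describes.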

The real-world threat of hackers

The PNNL stresses there is a pressing need for this kind of technology. In recent years, we’ve seen examples like the 2015 attack on Ukraine’s electrical grid as well as the hack of the Colonial Pipeline here in the United States.

While this new technology can be a game changer in national defense, it further reiterates why we all need to be vigilant about our own cybersecurity hygiene.

We might not be able to deploy our own version of Shadow Figment, but we can still make sure we use unique passwords for all of our accounts and devices, set up two-factor authentication, and be judicious in what emails and links we open to avoid phishing scams and ransomware attacks.

These new innovations from the U.S. government can offer a helpful reminder of how pressing the threat of cybercriminals is in our daily lives and what we can do to defend ourselves.

Tags AI, Shadow Figment

How Cybersecurity Impacted Working from Home During COVID-19

June 3, 2021 Pete Cavicchia

The past 12 months reoriented daily life in myriad ways. This is represented most starkly in how we work. In 2020, 62 percent of Americans worked from home, with 49 percent doing so for the first time, reports business website B2C.

That high volume of employees in the United States taking their work laptops home brought with it a 300 percent increase in cybercriminal activity targeting remote workers. The frequency of these hacks increased during the first six weeks of American quarantine and shelter-in-place orders early last spring.

The business website reports that 20 percent of companies experienced data breaches linked to these home-based workers.

If you’re a business owner — or even just an employee who still sees your home as your “office” for the foreseeable future — it is understandable that numbers like this give you cause for concern. With cybercrime targeting work from home on the rise, one bright spot appears to be the reality that cybersecurity etiquette is also on the rise.

Best practices for keeping your personal and professional data secure are part of the normalized parlance of office conversation. Now, it is becoming second nature for American workers to be cognizant of the importance of keeping their information protected from hackers.

Two-factor authentication, secure passwords, and wariness over phishing and ransomware attacks are increasingly a normalized part of professional life. In short, protecting your data isn’t just reserved for the company’s IT team.

CFO reports that companies are relying on the cloud. One example is the fact that organizations are using cloud-located intranets that use direct, private connections and even virtual desktop interfaces.

Artificial Intelligence (AI) and machine learning are also playing a needed role in identifying threats.

CFO cites a recent Infosecurity Magazine piece that shows how machine learning is detecting phishing attacks, referencing a cloud-based algorithm that scans email header messages to pinpoint what is known as “ratware,” or software that generates automatic mass messages. Then, another algorithm looks for phishing vocabulary in the body of an email. Over time, the algorithm grows more sophisticated, getting better at picking up suspicious emails as it collects more information about what is and isn’t malicious content hitting your inbox.
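The two-stage idea can be sketched in a few lines. To be clear, the header fingerprints and phishing vocabulary below are illustrative stand-ins, not the actual algorithms from the Infosecurity Magazine piece, and a real system would learn these lists rather than hard-code them.

```python
# Stage 1 data: hypothetical mailer fingerprints associated with "ratware".
RATWARE_MAILERS = {"massmailer", "bulkblast"}
# Stage 2 data: hypothetical phishing phrases to look for in the body.
PHISH_TERMS = {"verify your account", "urgent", "suspended", "click here"}

def flag_ratware(headers):
    """Stage 1: does the header fingerprint match known mass-mailer software?"""
    return headers.get("x-mailer", "").lower() in RATWARE_MAILERS

def phishing_score(body, terms=PHISH_TERMS):
    """Stage 2: fraction of known phishing phrases present in the body."""
    text = body.lower()
    return sum(term in text for term in terms) / len(terms)

def is_suspicious(headers, body, threshold=0.5):
    return flag_ratware(headers) or phishing_score(body) >= threshold

msg_headers = {"x-mailer": "BulkBlast"}
msg_body = "URGENT: your account is suspended. Click here to verify your account."
suspicious = is_suspicious(msg_headers, msg_body)   # flagged by both stages
```

In the learning version the article describes, the phrase list and threshold would be updated as the system sees more labeled mail, which is what lets it keep improving.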

While the rise of cybercrime is worrying, there is reason to hope. The deployment of canny AI made specifically to fight back against hackers coupled with increased cybersecurity literacy among America’s workforce hints at a future primed for a world that will continue to rely on working from home — safely.

Tags cybercrime, data, work, AI