AI is a double-edged sword in the cybersecurity world. On one side of the argument, the technology is still too early in its development to provide meaningful safeguards against novel threats, and it expands the attack surface for even the most rudimentary threat actors.
Conversely, smart employment of AI in cybersecurity outfits could bolster defenses beyond what long-trusted, widely implemented security measures achieve. If enterprises leverage AI for digital protection, teams must equip the AI and its underlying data with the resources they need for success as experts continue developing more comprehensive safety solutions. There is no time to waste when cyber threats are more dangerous, frequent, and unpredictable than ever.
Studies highlight the diversity, cleverness, and complexity of modern cyberattacks. Countless variants of the classic phishing scheme have troubled companies, alongside swift and pervasive ransomware strains, which are up 148% since 2020. Here are some other relevant findings about the recent cybersecurity landscape:
If hackers use machine learning algorithms to speed up information gathering and processing, and AI to suggest attack strategies based on those findings, the time between severe, company-destroying attacks could shrink from months to minutes.
Automation makes it straightforward for threat actors to increase the sheer quantity of potentially successful attacks. Data-driven targeting could produce more phishing emails or voice-replicating voicemails sent at precise times, such as when employees log in, or attacks that exploit backdoors left outdated and unprotected.
It’s a natural transition for threat actors to move to these tactics, especially after countless business digital transformations in a post-COVID-19 digital sphere and the rise of new technologies like blockchain and cryptocurrency. Additionally, data and security regulations and benchmarks lag behind as governments struggle to reach productive agreements, giving hackers more chances to strike.
Manual cyber risk management cannot keep up with emerging cyber threats and technologies. AI and other smart technologies can gracefully bridge the gap. The first way to push AI in the right direction is open communication with coders and developers. Outside of individual businesses, the internet is rife with flawed mechanisms and protocols.
Competitiveness in the cybersecurity sector has made companies quieter about their strategies when there should be more information and process sharing for the greater good. That mindset should shift toward communication between the world’s leading corporations and toward assessing third-party vendors on their AI incorporation and cyber hygiene practices.
AI is adaptable. Hackers reverse engineer and use it because the way it executes an attack can shift slightly when a barrier appears. Defenders must make AI work for them in the same way as it identifies new threats. Static defenses are no longer enough; every resource must react as new information comes in. If hackers can change their encryption techniques or signatures to persist through firewalls, then firewalls should identify the unique traits of those activities and adjust the way they respond.
This mentality suggests defenders should adjust AI’s focus slightly. Instead of adding more walls for protection, make existing walls better at restricting access or containing threats. Cyber professionals have the resources to keep data and hardware safe; they need to become more well-rounded and proactive. Altering their behavior this way can reduce the risks associated with overreliance on AI to execute cybersecurity.
Education is the best way to stay ahead of hackers using AI. Though experienced hacking syndicates employ knowledgeable attackers, cybersecurity professionals have access to more resources to stay ahead of the curve and can gain industry expertise in AI that hackers lack.
After patching some inefficiencies in AI tools, teams can more comfortably rely on them for consistent, safe detection and remediation. First, AI can scan traffic. It can force routing requests and packets to undergo verification, or be programmed to deny unfamiliar addresses automatically. Using this for automation can eliminate countless alerts that analysts would previously evaluate manually, preventing alert and process fatigue.
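The automatic-deny behavior described above can be sketched very simply. This is a minimal illustration, not a production system: it assumes a static allowlist of trusted network ranges, whereas a real deployment would derive "familiar" addresses from asset inventory or a learned traffic baseline.

```python
# Minimal sketch of automated traffic screening: unfamiliar source
# addresses are denied automatically so analysts only see alerts that
# genuinely need human review.
from ipaddress import ip_address, ip_network

# Hypothetical trusted ranges -- in practice these would come from an
# asset inventory or a model's learned baseline, not a hardcoded list.
TRUSTED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24")]

def screen_packet(src_ip: str) -> str:
    """Return 'allow' for familiar addresses, 'deny' otherwise."""
    addr = ip_address(src_ip)
    if any(addr in net for net in TRUSTED_NETWORKS):
        return "allow"
    return "deny"

# Example: an internal address passes, an unknown external one is denied.
print(screen_packet("10.1.2.3"))      # allow
print(screen_packet("203.0.113.7"))   # deny
```

The point is the shape of the automation: a deterministic first pass removes the bulk of routine decisions, and only the remainder reaches an analyst.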
AI can scan more than network traffic. It could automate firmware and software scans and install updates so that unpatched flaws don’t expose entryways to hackers. If analysts want to review patch notes before installation, they can receive curated notifications from the AI instead. Additionally, the AI could scan update code before it reaches analysts, adding another set of eyes on potential vulnerabilities.
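The curated-notification idea can be sketched as a version check against an advisory feed. Everything here is illustrative: the package names, the advisory data, and the `curate_notifications` helper are assumptions standing in for a real vulnerability feed and inventory system.

```python
# Hedged sketch: compare installed package versions against a
# hypothetical advisory feed and emit curated, human-readable
# notifications instead of raw alerts for every package.

# Hypothetical advisory feed: package -> minimum safe version.
ADVISORIES = {"openssl": (3, 0, 12), "libxml2": (2, 11, 5)}

def parse_version(v: str) -> tuple:
    """Turn '3.0.8' into (3, 0, 8) so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

def curate_notifications(installed: dict) -> list:
    """Return notices only for packages below their safe version."""
    notices = []
    for pkg, version in installed.items():
        safe = ADVISORIES.get(pkg)
        if safe and parse_version(version) < safe:
            notices.append(f"{pkg} {version} is below safe version "
                           f"{'.'.join(map(str, safe))}; review patch notes")
    return notices

# Example: only the outdated package produces a notification.
print(curate_notifications({"openssl": "3.0.8", "libxml2": "2.12.0"}))
```

Up-to-date packages generate no output at all, which is exactly the alert-fatigue reduction the article describes.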
As it scans all that information, AI can process more data in a workday than human teams could. Companies have spread data across silos, data centers, cloud infrastructure, and internal servers. Employees can’t continuously review it all for data minimization and ongoing backups, but AI could on a programmed schedule, saving analysts time and resources.
AI can also increase business resilience by improving ROI. Leveraging AI is an upfront cost, but it provides consistent gains. When CSOs and CFOs discuss cybersecurity budget allotments, it is easy to argue in favor of AI when the cost of a breach is up 80%, the most costly in history, a figure that doesn’t necessarily include the aftermath of media coverage and reputation repair.
Cyber threats will only calm down if cybersecurity and IT professionals set a precedent for how AI defenses can be stronger than hacker offenses.
When a new tech tool comes onto the scene, it will always result in a push and pull of attacker-defender utilization — but there are ways to make AI the best fit to serve protective measures instead of malicious intentions. Collaboration and care will result in the best AI assistance for the future of cybersecurity.
Eleanor Hecks is the editor of Designerly Magazine. Eleanor was the creative director and occasional blog writer at a prominent digital marketing agency before becoming her own boss in 2018. She lives in Philadelphia with her husband and dog, Bear.
If I had a penny for every time Google claimed it takes privacy seriously, even as several incidents point to the contrary, I’d have a company as huge as Google myself. The company’s recent settlement with the Federal Trade Commission for breaking privacy promises, and its agreement in the past year to submit to 20 years of FTC privacy audits over “deceptive privacy practices,” have pushed it to treat privacy with new seriousness. In 2010, after revelations that its Street View cars had been collecting data from open, unsecured Wi-Fi access points, Google appointed Alma Whitten as its director of privacy, added an information security awareness program for employees, and began requiring engineering product managers to maintain privacy design documentation for every project.
Now the company is further formalizing its internal processes for testing product privacy with the formation of a “red team”: a group that attempts to breach the organization’s defenses in order to make them more effective. In the security world, this is usually done as a form of penetration testing. For example, financial institutions often hire hackers to try to break into their systems to see where the cracks are and how they can be filled.
Recently, a Google job posting spotted by Kaspersky Lab called for candidates to apply for the position of Data Privacy Engineer on the Privacy Red Team. The company expects the selected candidate to help ensure that Google products are designed to the highest possible standards and operated in ways that protect user privacy. It also expects the candidate, as a member of the Privacy Red Team, to independently identify, research, and help resolve potential privacy risks across all of Google’s current products, services, and business processes. Google’s response to questions about the new hires was rather coy, however; a spokesperson would only say that the company is always looking for talented people for various roles.
However, the seriousness of Google’s much-touted, newly rediscovered concern for privacy becomes dubious when competitors provide protections that Google does not, such as the Do Not Track setting. That setting is not a complete protection (it won’t stop every kind of computer monitoring software or cell phone spy app), but most major browsers offer it, while Chrome does not.
Natalia David has become a reliable name in the sphere of technology. Her work on cell phone security apps and PC security has earned her great recognition. You can also follow her on Twitter @NataliaDavid4.
We live in a world where digital interaction is the number-one form of communication. With that speed and convenience come certain security issues that always linger in the back of people’s minds. With infinite digital lines tying the global community together, perpetrators will eventually try to pry their way into those communication lines. The search bar is another area where Google is spending time on security.
For a few years now, Google has been working to make search considerably more secure than it already is.
“We’ve worked hard over the past few years to increase our services’ use of an encryption protocol called SSL, as well as encouraging the industry to adopt stronger security standards. For example, we made SSL the default setting in Gmail in January 2010 and introduced an encrypted search service located at https://encrypted.google.com four months later. Other prominent web companies have also added SSL support in recent months.
As search becomes an increasingly customized experience, we recognize the growing importance of protecting the personalized search results we deliver. As a result, we’re enhancing our default search experience for signed-in users. Over the next few weeks, many of you will find yourselves redirected to https://www.google.com (note the extra “s”) when you’re signed in to your Google Account. This change encrypts your search queries and Google’s results page. This is especially important when you’re using an unsecured Internet connection, such as a WiFi hotspot in an Internet cafe. You can also navigate to https://www.google.com directly if you’re signed out or if you don’t have a Google Account.”
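The redirect Google describes, sending signed-in users from `http://` to `https://www.google.com`, amounts to rewriting the URL scheme so queries travel encrypted. The sketch below is an illustration of that behavior, not Google’s actual code; the `force_https` helper is an assumed name.

```python
# Illustrative sketch of the http -> https redirect behavior described
# in Google's announcement: rewrite a plain-HTTP search URL to its
# encrypted equivalent, leaving already-secure URLs untouched.
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Rewrite an http:// URL to https://; pass https URLs through."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

# Example: the query string survives, only the scheme changes.
print(force_https("http://www.google.com/search?q=test"))
# -> https://www.google.com/search?q=test
```

On an unsecured connection like a cafe hotspot, that one-letter difference in the scheme is what keeps the query and results page out of an eavesdropper’s view.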
With personalized search already here for Google and Bing, the security threat is only growing for those looking to tap into your search history. Google aims to change that with its most recent security measures protecting the search bar.