There’s a long list of reasons US stability is now teetering between “Fyre Festival” and “Charlie Sheen’s ‘Tiger Blood’ era.” Now you can add cybersecurity to the tally. A crucial cyber defense law, the Cybersecurity Information Sharing Act of 2015 (CISA 2015), has lapsed. With the government out of commission, the nation’s computer networks are more exposed for… who knows how long. Welcome to 2025, baby.
CISA 2015 promotes the sharing of cyber threat information between the private and public sectors. It includes legal protections for companies that might otherwise hesitate to share that data. The law promotes “cyber threat information sharing with industry and government partners within a secure policy and legal framework,” a coalition of industry groups wrote in a letter to Congress last week.
As Cybersecurity Dive explains, CISA 2015 shields companies from antitrust liability, regulatory enforcement, private lawsuits and FOIA disclosures. Without it, sharing gets more complicated. “There will just be many more lawyers involved, and it will all go slower, particularly new sharing agreements,” Ari Schwartz, cybersecurity director at the law firm Venable, told the publication. That could make it easier for adversaries like Russia and China to conduct cyberattacks.
Before the shutdown, there was support for renewal from the private sector, the Trump administration and bipartisan members of Congress. One of the biggest roadblocks was Sen. Rand Paul (R-KY), chairman of the Senate Homeland Security Committee. He objected to reauthorizing the law without changes addressing some of his pet issues. Notably, he wanted to add language that would neuter the government’s ability to combat misinformation and disinformation. He canceled his planned revision of the bill after a backlash from his peers, and the committee then failed to approve any version before the expiration date.
Meanwhile, House Republicans included a short-term CISA 2015 renewal in their government funding bill. But Democrats, whose support the GOP needs, wouldn’t back the Continuing Resolution for other reasons: They want Affordable Care Act premium tax credits extended beyond their scheduled expiration at the end of the year. Without an extension, Americans’ already spiking health insurance premiums will continue to skyrocket.
In its letter to Congress last week, the industry coalition warned that the expiration of CISA 2015 would lead to “a more complex and dangerous” security landscape. “Sharing information about cyber threats and incidents makes it harder for attackers because defenders learn what to watch for and prioritize,” the group wrote. “As a result, attackers must invest more in new tools or target different victims.”
As artificial intelligence (AI) takes the world by storm, one particular facet of this technology has left people in both awe and apprehension. Deepfakes, which are synthetic media created using artificial intelligence, have come a long way since their inception. According to a survey by iProov, 43% of global respondents admit that they would not be able to tell the difference between a real video and a deepfake.
As we navigate the threat landscape in 2024, it becomes increasingly vital to understand the implications of this technology and the measures to counter its potential misuse.
The trajectory of deepfake technology has been nothing short of a technological marvel. In their infancy, deepfakes were characterized by relatively crude manipulations, often easy to spot thanks to telltale imperfections. These early iterations, though intriguing, lacked the finesse that would later become synonymous with the term “deepfake.”
As we navigate the technological landscape of 2024, the progression of deepfake sophistication is evident. This evolution is intricately tied to the rapid advancements in machine learning. The algorithms powering deepfakes have become more adept at analyzing and replicating intricate human expressions, nuances, and mannerisms. The result is a generation of synthetic media that, at first glance, can be indistinguishable from authentic content.
This heightened realism in deepfake videos is causing a ripple of concern throughout society. The ability to create hyper-realistic videos that convincingly depict individuals saying or doing things they never did has raised ethical, social, and political questions. The potential for these synthetic videos to deceive, manipulate, and mislead is a cause for genuine apprehension.
Earlier this year, Google CEO Sundar Pichai warned people about the dangers of AI content, saying, “It will be possible with AI to create, you know, a video easily. Where it could be Scott saying something or me saying something, and we never said that. And it could look accurate. But you know, on a societal scale, you know, it can cause a lot of harm.”
As we delve deeper into 2024, the realism achieved by deepfake videos is pushing the boundaries of what was once thought possible. Faces can be seamlessly superimposed onto different bodies, and voices can be cloned with uncanny accuracy. This not only challenges our ability to discern fact from fiction but also poses a threat to the very foundations of trust in the information we consume. A report by Sensity shows that the number of deepfakes created has been doubling every six months.
The impact of hyper-realistic deepfake videos extends beyond entertainment and can potentially disrupt various facets of society. From impersonating public figures to fabricating evidence, the consequences of this technology can be far-reaching. The notion of “seeing is believing” becomes increasingly tenuous, prompting a critical examination of our reliance on visual and auditory cues as markers of truth.
In this era of heightened digital manipulation, it becomes imperative for individuals, institutions, and technology developers to stay ahead of the curve. As we grapple with these advancements’ ethical implications and societal consequences, the need for robust countermeasures, ethical guidelines, and a vigilant public becomes more apparent than ever.
Governments and industries globally are not mere spectators in the face of the deepfake menace; they have stepped onto the battlefield with a recognition of the urgency that the situation demands. According to reports, the Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with several of the country’s biggest research institutions to get ahead of deepfakes. Initiatives aimed at curbing the malicious use of deepfake technology are currently in progress, and they span a spectrum of strategies.
One front in this battle involves the development of anti-deepfake tools and technologies. Recognizing the potential havoc that hyper-realistic synthetic media can wreak, researchers and engineers are tirelessly working on innovative solutions. These tools often leverage advanced machine learning algorithms themselves, seeking to outsmart and identify deepfakes in the ever-evolving landscape of synthetic media. A great example of this is Microsoft offering US politicians and campaign groups an anti-deepfake tool ahead of the 2024 elections. This tool will allow them to authenticate their photos and videos with watermarks.
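Microsoft hasn’t published the internals of that tool, but the core idea behind authentication of this kind is binding a verifiable signature to a piece of media so that any later alteration is detectable. Below is a minimal Python sketch of that idea using only the standard library; the file name, key handling and HMAC scheme are illustrative assumptions, not the actual product (real provenance systems typically use public-key signatures and embedded manifests).

```python
import hashlib
import hmac
import secrets

def sign_media(path: str, key: bytes) -> str:
    """Return an HMAC-SHA256 tag over the file's contents."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(path: str, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_media(path, key), tag)

key = secrets.token_bytes(32)                      # publisher's secret key
open("photo.jpg", "wb").write(b"raw image bytes")  # placeholder media file
tag = sign_media("photo.jpg", key)                 # published with the photo

print(verify_media("photo.jpg", key, tag))   # True: file is untouched
open("photo.jpg", "ab").write(b"tamper")     # any edit, however small...
print(verify_media("photo.jpg", key, tag))   # False: ...breaks the tag
```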
Apart from that, industry leaders are also investing significant resources in research and development. The goal is not only to create more robust detection tools but also to explore technologies that can prevent the creation of convincing deepfakes in the first place. Recently, TikTok banned deepfakes of nonpublic figures on the app.
However, it’s essential to recognize that the battle against deepfakes isn’t solely technological. As technology evolves, so do the strategies employed by those with malicious intent. Therefore, to complement the development of sophisticated tools, there is a need for public education and awareness programs.
Public understanding of the existence and potential dangers of deepfakes is a powerful weapon in this fight. Education empowers individuals to critically evaluate the information they encounter, fostering a society less susceptible to manipulation. Awareness campaigns can highlight the risks associated with deepfakes, encouraging responsible sharing and consumption of media. Such initiatives not only equip individuals with the knowledge to identify potential deepfakes but also create a collective ethos that values media literacy.
As we stand at the crossroads of technological innovation and potential threats, unmasking deepfakes requires a concerted effort. It necessitates the development of advanced detection technologies and a commitment to education and awareness. In the ever-evolving landscape of synthetic media, staying vigilant and proactive is our best defense against the growing threat of deepfakes in 2024 and beyond.
Education has always been at the forefront of societal progress, shaping the minds of future generations. In recent years, the traditional classroom has been undergoing a profound transformation. This digital shift in education has changed how we teach and learn, from tablets and interactive whiteboards to online learning environments and virtual reality. The shift, however, is not without difficulties.
The proliferation of mobile devices and cloud workspaces broadens the attack surface, making it easier for bad actors to access a school’s network. Schools, universities and other educational institutions hold vast amounts of sensitive data, such as academic records, student and parent addresses, and phone numbers.
This makes them an enticing target for cyber attackers. Reports show 190 known ransomware attacks against educational institutions between June 2022 and May 2023, with attacks rising 84% over the most recent six months of that period.
Apart from the monetary repercussions of such attacks, the danger to students’ privacy, the damage to these institutions and their impact on society are genuinely troubling. For instance, last year Lincoln College in Illinois, a 157-year-old institution that had survived two world wars, the Spanish flu, the Great Depression and the COVID-19 pandemic, fell victim to a ransomware attack and was finally forced to shut down.
So, the seriousness of cybersecurity in education cannot be overstated. Fortunately, cybersecurity training in schools has been gaining steam recently. In March, the governor of North Dakota signed a bill that makes cybersecurity training a mandatory part of the curriculum for K-12 students. However, safeguarding privacy and securing endpoints and networks while providing an unhindered learning experience is tricky.
The balancing act between security, privacy and productivity
In an increasingly interconnected world, where technology is deeply integrated into education, protecting students, institutions and their data is a prime concern.
The backbone of any institution’s security lies in its network infrastructure, which serves as the foundation for its cybersecurity. Strong firewalls, intrusion detection systems, secure network access controls and threat prevention systems are essential components of a secure network. Furthermore, to avoid unauthorized access and data breaches, it is essential to monitor the network and fix any vulnerabilities regularly.
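As a toy illustration of that monitoring advice, the sketch below probes a handful of well-known ports on a host the administrator owns and flags anything unexpectedly open. The host address, port list and “expected” set are placeholders; real network monitoring relies on dedicated scanners and intrusion detection systems rather than a script like this.

```python
import socket

CHECK = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}
EXPECTED_OPEN = {443}            # the only service we intend to expose

def open_ports(host: str, timeout: float = 0.5) -> set:
    """Return the subset of CHECK ports accepting TCP connections."""
    found = set()
    for port in CHECK:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.add(port)
        except OSError:
            pass                 # closed, filtered or unreachable
    return found

host = "192.0.2.10"              # placeholder: a host you administer
for port in sorted(open_ports(host) - EXPECTED_OPEN):
    print(f"unexpected open port {port} ({CHECK[port]}) on {host}")
```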
Along with the network, securing the endpoints is also pivotal as more and more schools provide computers, tablets or mobile devices in the classroom. Instituting policies that require strong, periodically updated passwords and regularly applying security patches and updates to operating systems are essential to keeping these devices secure. Alternatively, a Unified Endpoint Management (UEM) solution provides endpoint security features such as enforcing strict password policies and remotely pushing app and OS updates or patches.
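To make the password-policy point concrete, here is a minimal Python sketch of the kind of complexity rule a UEM might enforce on managed devices. The specific thresholds are assumptions for illustration, not any vendor’s defaults.

```python
import re

# Each rule: a regex the password must match, plus a readable description.
RULES = [
    (r".{12,}",       "at least 12 characters"),
    (r"[A-Z]",        "an uppercase letter"),
    (r"[a-z]",        "a lowercase letter"),
    (r"\d",           "a digit"),
    (r"[^A-Za-z0-9]", "a symbol"),
]

def password_violations(candidate: str) -> list:
    """Return the rules the candidate password fails to satisfy."""
    return [desc for pattern, desc in RULES if not re.search(pattern, candidate)]

print(password_violations("library-Card-2024!"))  # [] -> compliant
print(password_violations("letmein"))             # fails most rules
```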
Apart from securing endpoints, when the number of devices keeps increasing, managing them and ensuring they are not misused creates another hurdle. Provisioning all school-owned devices with a UEM allows administrators to turn these devices into focused learning tools. Its app management capabilities help push essential applications to the devices directly from the UEM console without any end-user intervention.
Moreover, any undesirable apps could be blocked or restricted from being installed on the devices. The web content filtering capability does the same with websites, preventing students from visiting unwanted or malicious sites. A UEM supporting multiple operating systems also removes the hassle of using a different solution for each OS.
Educational institutions will always hold a significant quantity of sensitive and personal data. Therefore, it is imperative to protect this data to retain the privacy and confidence of students, parents and staff. The scariest part is that losing sensitive data, such as student records, can put students or their families at risk of dangerous attacks such as phishing scams or even identity theft. One way to prevent this is to employ strong data storage procedures and encrypt data at rest and in transit.
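As a minimal sketch of encryption at rest, the example below uses the Fernet recipe from the widely used Python cryptography package (an assumed dependency). In practice the key would live in a key-management service, never alongside the data, and the record contents here are made up.

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # in production, fetched from a key vault
f = Fernet(key)

record = b"student_id=1042;address=12 Elm St;phone=555-0101"
token = f.encrypt(record)             # ciphertext, safe to store on disk
assert f.decrypt(token) == record     # round-trips with the right key

try:
    Fernet(Fernet.generate_key()).decrypt(token)   # wrong key
except InvalidToken:
    print("ciphertext is unreadable without the original key")
```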
To that end, deploying a Data Loss Prevention (DLP) solution goes a long way in protecting data privacy. A major element of avoiding breaches of this nature is closely monitoring the flow of sensitive data. DLP systems can help these institutions track and protect their data by enforcing preconfigured policies. Additionally, institutions must make it a top priority to comply with data privacy laws like the Family Educational Rights and Privacy Act (FERPA), the General Data Protection Regulation (GDPR) or other legislation, depending on their location.
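A toy version of such a preconfigured policy might look like the following: scan outbound text against a couple of sensitive-data patterns before it leaves the institution. Real DLP systems inspect files, email and network flows with far richer policies; the patterns and message here are illustrative only.

```python
import re

POLICIES = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address":             re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(text: str) -> list:
    """Return the names of every policy the outbound text violates."""
    return [name for name, pattern in POLICIES.items() if pattern.search(text)]

outbound = "Transcript for Jane Doe, SSN 123-45-6789, contact jane@example.edu"
hits = flag_sensitive(outbound)
if hits:
    print("blocked outbound message; matched policies:", ", ".join(hits))
```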
Finally, no cybersecurity system is foolproof, which is why educational institutions must have a well-defined incident response and disaster recovery plan in place. The impact of a potential cybersecurity incident can be reduced by regularly backing up important data and testing disaster recovery plans, ensuring that the institution can recover quickly and carry on with business as usual. In the event of such an attack, a cyber insurance policy can diminish the fallout by covering the monetary costs of ransomware, data breaches and other cybersecurity incidents.
Promoting a culture of cybersecurity awareness
An essential step in constructing a formidable cyber defense is developing a culture of cybersecurity awareness. Strong password usage, recognizing phishing attempts and preserving personal information are just a few of the safe online habits that can be inculcated through regular training sessions and awareness programs. The changes North Dakota brought to its curriculum and pedagogy are a palpable example of promoting cybersecurity awareness.
As educational institutions become more dependent on technology, addressing cybersecurity is not an option—it is a necessity. Schools and colleges can move toward a more secure, zero trust-based architecture by fostering a culture of cybersecurity awareness, deploying secure network infrastructure, preserving data and privacy, enhancing endpoint security and developing proactive incident response procedures. With cyber scams getting harder to identify, a zero-trust mentality can bolster their security posture and protect their students and data.
The cybersecurity world faces new threats beyond targeted ransomware attacks, according to experts at the recent RSA cybersecurity industry conference in San Francisco.
Joe McMann, head of cybersecurity services at Binary Defense, a cybersecurity solutions provider, said the new battleground is data extortion and companies need to shift gears to face the threat.
Traditionally, ransomware attackers encrypt or delete an organization’s proprietary data and demand a ransom to undo the attack. McMann said hackers are now focusing on stealing customer or employee data and then threatening to leak it publicly.
“By naming, shaming, threatening reputational impact, they force the hands of their targets,” McMann said.
The International Data Corporation predicts firms will spend over $219 billion on cybersecurity this year, and McMann said cybercriminals constantly evolve their tactics.
Hackers shifted tactics after ransomware attacks brought an unwelcome level of visibility from law enforcement and governments, and cybersecurity professionals became adept at recovering encrypted data. Instead of paralyzing hospitals and pipelines, he said, criminals changed gears to collect data and threaten companies with customer dissatisfaction and public outcry.
At the end of March, OpenAI documented a data leak, caused by a bug in an open-source library, that made it possible to see personal AI chat histories, payment information and addresses. The team patched the leak in hours, but McMann said once data is out there, hackers can use it.
Chris Pierson, founder and CEO of Black Cloak, a digital executive protection company, said companies understand the growing threat of data extortion after public breaches. In the past year alone, he said Twilio, LastPass, and Uber all faced attacks that saw hackers targeting employees outside corporate security protection.
“For example, the LastPass breach saw one of four key individuals targeted on their personal computer, through a personal public IP address getting in through an unpatched solution,” he said.
The hackers stole credentials “outside the castle wall environment, on personal devices,” he said, using that data months later as a way into the corporate environment.
He said the advent of home offices accelerated employee targeting. As companies shifted to a digital-first world, employees naturally started working on personal devices.
Before the pandemic, Fortune 500 companies spent millions to secure corporate devices and buildings, but employees are not as well protected at home. “The moment an executive walks out of the building, uses their personal device or home network that they share with corporate devices, the attack surface changes,” Pierson said. What’s more, digital footprints are easy to find online, he said. “40% of our corporate executives’ home IP addresses are public on data broker websites.”
Pierson said it only takes one vulnerable device on a home network to open up the entire network.
Looking across the street at the RSA convention building, filled with more than 45,000 industry attendees, Pierson said criminals always choose the path of least resistance.
“You don’t have to go in through all the gear that’s out here at RSA protecting the actual company; you go through the $5 of cybersecurity at home and get everything else,” Pierson said. “Cybercriminals are targeting at a personal level because they know they can get the data, and there are no controls out there,” he added.
Cybersecurity has higher visibility this year, with phishing attempts and scam messages a daily occurrence for most people. And companies know that proposed SEC guidelines will add another layer of accountability.
When finalized, the rules would require public firms to disclose data breaches to investors within four days and to have at least one board member with cybersecurity experience. Though a Wall Street Journal survey found that three-fourths of respondents had a cybersecurity director, Pierson said companies were at RSA looking for advice.
McMann said companies should focus on the simple fixes first and not worry about AI chat breaches if they aren’t using two-factor authentication on personal accounts. Criminals will first try older methods like ransomware before moving on to new ones.
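As a concrete picture of what that two-factor advice involves under the hood, here is a short sketch of the time-based one-time password (TOTP) flow used by most authenticator apps, written with the third-party pyotp package (an assumed dependency); the account names are placeholders.

```python
# pip install pyotp
import pyotp

# Enrollment: the service stores a shared secret; the user loads it into
# an authenticator app, usually by scanning a QR code of this URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# Login: the app derives a 6-digit code from the secret and the current
# 30-second window; the service recomputes it and compares.
code = totp.now()
print(totp.verify(code))      # True within the current window
print(totp.verify("000000"))  # a guessed code almost never verifies
```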
He said practicing for cyberattacks has become as important as any other emergency drill. On a positive note, McMann said the success of cybersecurity professionals is why criminals are looking for new modes of attack.
“If you don’t have your operations streamlined and effective, if you don’t have good people and processes in place, don’t worry about the other stuff,” he said. “There’s a lot of fundamentals that get skipped.”
Well, it depends on who you ask. Cyberattack damage is projected to reach $10.5 trillion annually by 2025, and security advocates say you can never be too careful when fortifying your data and devices. Of course, cybersecurity on both home and office devices is essential to navigating any digital space, and it’s vital to keep one’s information and sensitive data protected. But in day-to-day life, trying to keep your devices secured can quickly get convoluted.
That’s partially because of the disparate state of the cybersecurity industry. Users are spoiled with protection options from multi-factor authentication (MFA) to VPNs, password managers and good old antivirus programs. But the issue doesn’t come from the selection available. Rather, it’s that most of these cybersecurity tools are not in conversation with each other.
Yes, having your cybersecurity products connected can put them at risk to some extent should one of them become compromised. However, when an individual exclusively uses a password manager, a Google-generated “difficult” password, or MFA on one single account, are they really any safer?
Likewise, if a cybersecurity feature a consumer uses gets compromised or hacked, it could discourage them from exploring other security products while they cope with being burned by a clever hacker. Of the millions of accounts exposed in the LastPass breach, many of the consumers using the program probably assumed they were properly fortifying their devices and sensitive information.
Although it’s likely not the best idea to merge every cybersecurity measure under one umbrella, entrepreneurs should see the value in trying to connect the industry’s loose threads.
Making cybersecurity more seamless could end up keeping more people safe in the long run. Building bridges to improve user experience and creating solutions that cover multiple bases also strengthens the long-term viability of a cybersecurity company by expanding its security reach.
If an entire security company’s business rests on the stability and success of one product, it will undoubtedly lose revenue and consumer trust should that one product get breached. And they would need plenty of luck to build up that goodwill without the PR artillery that Big Tech companies have.
Another factor to consider in helping unify cybersecurity lies in its cost. While many programs operate through donations or are free to use in exchange for user data, most serious cybersecurity products come with a price tag.
Around 61% of users in the U.S. rely on free antivirus software, according to an annual report from Security.org. No surprise there, but the same report states roughly 33 million households pay for some type of security software, albeit with no distinction as to how that is spread across VPNs, secure browsers, and other features. This indicates users are willing to pay for personal protection, but only for certain kinds of products.
Likewise, while an individual might pay for an antivirus program or a VPN, it can be hard to convince users to pay for multiple security products unless the individual is a business owner or regularly deals with highly sensitive information.
Outside of home-bound device security, mobile devices have also pushed privacy and security issues to the forefront of tech conversations as they reach near-universal use. Consumers, in general, have become much warier about their data privacy and how to secure smartphones from malware and attacks, given how much personal information these devices now hold.
But most people don’t read the permissions they grant apps and programs on their devices, and many don’t go the extra mile to secure their phones beyond the built-in safeguards developed by Apple and Google. As more users search for ways to “declutter” their mobile experience, another clear gap in cybersecurity interoperability comes into view.
Companies such as privacy-preserving mobile developer Unplugged are already banking on the need for cybersecurity convergence, offering a multi-pronged app suite to boost mobile and desktop privacy and security. The project operates through a subscription-based model, which creates a new pathway to access high-level security products without having to pay exorbitant fees for each new program.
Despite the siloing of cybersecurity, changes are clearly on the horizon from both a developer and regulatory level. In March 2023 alone, the U.S. government unveiled a beefed-up National Cybersecurity Strategy to set new regulatory standards and corporate responsibilities surrounding cybersecurity. The extensively updated strategy outlines key pillars, including support for critical infrastructure, addressing the cybersecurity skills gap, setting regulatory baselines and fostering collaboration between the public and private sectors.
Although we have yet to see how these new frameworks will affect consumer-level cybersecurity, the U.S. government’s emphasis on collaboration and connection underscores how necessary both are to building a resilient cybersecurity future.
Security should be a tenet of any tech product, given how sophisticated attacks can get. As more facets of our daily lives move to the digital realm, there is an imperative to improve security processes before a lapse turns catastrophic. Entrepreneurs should consider projects in this sector that are working to build common ground and security seamlessness, cutting through the general malaise that users might have around protecting their devices.
Anyone who depends on LinkedIn to search for jobs, find business partners or pursue other opportunities is probably aware that the business social media site has had issues with fake profiles. While that is no different from other social media platforms, including Twitter and Facebook, it presents a distinct set of problems for users who rely on LinkedIn for professional purposes.
Between January 1 and June 30, more than 21 million fake accounts were detected and removed from LinkedIn, according to the company’s community report. While 95.3% of those fake accounts were stopped at registration by automated defenses, according to the company, there was a nearly 28% increase in fake accounts caught compared to the previous six-month period. LinkedIn says it currently has more than 875 million members on its platform.
While the Microsoft-owned professional social media platform has rolled out new features in recent months to help users better determine if someone contacting them is a real or fake profile, cybersecurity experts say there are several things that users on the platform can do to protect themselves.
Creators of fake LinkedIn profiles sometimes try to drive engagement through content that links to malicious sites, said Mike Clifton, executive vice president and chief information and digital officer at Alorica, a global customer service outsourcing firm.
“For example, we see those that revolve around posts and content promoting a work event, such as a webinar, that uses real photos and people’s real information to legitimize the information and get others to register, often on a fake third-party Web site,” Clifton said.
How to avoid getting duped by fraudulent profiles
Cybercriminals often rely on a human touch to give LinkedIn users the impression that the fake profile belongs to someone they know, or is two degrees removed from someone they know. “This has been going on for years, and at this point can still evade even sophisticated fraud detectors,” Clifton said. “Like we remind our employees and customers, it’s important to stay vigilant and engage cautiously on social networks to protect your information.”
Recruiters who rely heavily on LinkedIn to search for prospective employees can find fake profiles especially troublesome, said Akif Khan, vice president and analyst at research firm Gartner.
“In addition, in other areas of fraud management — for example, when suspicious ecommerce transactions are being manually reviewed — agents will look across social media sites including LinkedIn to try and see if [a] person has a credible digital footprint which would suggest that they are a real-person rather than a fake identity,” Khan said.
For these reasons it can serve the purposes of bad actors to have fake LinkedIn profiles, Khan said.
Gartner is seeing the problem of phony accounts across all social media platforms. “Bad actors are trying to craft fake identities and make them look real by leaving a plausible-looking digital footprint across different platforms,” Khan said.
It’s more likely that fake profiles are set up manually, Khan said. However, where bad actors are creating large numbers of fake profiles — which can be used to abuse advertising processes or to sell large volumes of followers or likes on demand — they’ll use bots to automate that account creation.
The challenge for LinkedIn users is that profiles on social media platforms are easy to create and are typically not verified in any way. LinkedIn has asked users who encounter any content on the platform that looks like it could be fake to report it to the company. Users should specifically be on the lookout for profiles with abnormal profile images or incomplete work history, and other indicators including inconsistencies in the profile image and education.
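Read as a checklist, those red flags lend themselves to a simple scoring heuristic. The toy Python sketch below is purely illustrative — the field names, weights and threshold are hypothetical and are not LinkedIn’s actual signals.

```python
# Hypothetical red-flag weights; none of these are LinkedIn's real signals.
RED_FLAGS = {
    "stock_or_generated_photo": 3,
    "incomplete_work_history":  2,
    "photo_education_mismatch": 2,
    "account_under_30_days":    1,
    "few_mutual_connections":   1,
}

def suspicion_score(profile: dict) -> int:
    """Sum the weights of every red flag present on the profile."""
    return sum(w for flag, w in RED_FLAGS.items() if profile.get(flag))

candidate = {"stock_or_generated_photo": True, "incomplete_work_history": True}
score = suspicion_score(candidate)
print("treat with caution" if score >= 4 else "no obvious red flags", score)
```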
“Always seek corroboration from other sources if you’re looking at an account and are making decisions based on what you see,” Khan said. “The bigger issue here is for the platforms themselves. They need to ensure that they have appropriate measures in place to detect and prevent automated account creation, particularly at large scale.”
What LinkedIn is doing to detect fakes and bots
Tools for detection do exist, but using them is not an exact science. “Verifying the identity of a user when creating an account would be another effective way to make it more difficult to set up fake accounts, but such identity proofing would have an impact in terms of cost and user experience,” Khan said. “So these platforms are trying to strike a balance in terms of the integrity of accounts and not putting users off creating accounts,” he said.
LinkedIn is taking steps to address the fake accounts problem.
The site is using technology such as artificial intelligence along with teams of experts to remove policy-violating content that it detects before the content goes live. The vast majority of detected fake accounts are caught by automated defenses such as AI, according to a blog post from Oscar Rodriguez, vice president of product management at LinkedIn.
LinkedIn declined to comment further.
The company is also collaborating with peer companies, policymakers, law enforcement and government agencies in efforts to prevent fraudulent activity on the site.
In its latest effort to stop fake accounts, LinkedIn rolled out new features and systems in October to help users make more informed decisions about members they are interacting with, and enhanced the automated systems that keep inauthentic profiles and activity off the platform.
An “about this profile” feature shows users when profiles were created and last updated, along with information about whether the members had verified phone numbers and/or work emails associated with their accounts. The goal is that viewing this information will help users in deciding whether to accept a connection request or reply to a message.
LinkedIn says rapid advances in AI-based synthetic image generation technology have led to the creation of a deep learning model to better catch profiles made with AI. AI-based image generators can create an unlimited number of unique, high-quality profile photos that do not correspond to real people, Rodriguez wrote in the blog post, and phony accounts sometimes use these convincing, AI-generated profile photos to make a profile appear more authentic.
The deep-learning model proactively checks profile photo uploads to determine if an image is AI-generated, using technology designed to detect subtle image artifacts associated with the AI-based synthetic image generation process — without performing facial recognition or biometric analyses, Rodriguez wrote.
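LinkedIn hasn’t published that model, but as a heavily simplified stand-in for “subtle image artifacts,” generated images often differ from camera photos in their frequency statistics. The toy sketch below computes one such feature — the share of spectral energy far from the center of the image’s 2D Fourier transform — using numpy and Pillow (assumed dependencies); a real detector is a trained deep network, not a lone threshold.

```python
# pip install numpy pillow
import numpy as np
from PIL import Image

def high_freq_ratio(path: str) -> float:
    """Share of spectral energy far from the image's low-frequency center."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    return float(spectrum[radius > min(h, w) / 4].sum() / spectrum.sum())

# A real pipeline feeds features (or raw pixels) into a classifier trained
# on known real and generated photos; a lone threshold would be far too crude.
print(high_freq_ratio("upload.jpg"))   # hypothetical uploaded profile photo
```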
The model helps increase the effectiveness of LinkedIn’s automated anti-abuse defenses to help detect and remove fake accounts before they can reach members.
The company also added a warning to some LinkedIn messages that include high-risk content that could impact user security. For example, users might be warned about messages that ask them to take conversations to other platforms, because that might be a sign of a scam.