ReportWire

Tag: cybersafety

  • Deepfakes Are on the Rise — Will They Change How Businesses Verify Their Users? | Entrepreneur

    You know how you can’t do anything these days without proving who you are, whether you’re opening a bank account or just hopping onto a car-sharing service? With online identity verification becoming more integrated into daily life, fraudsters have become more interested in outsmarting the system.

    Criminals are investing more money and effort to overcome security solutions. Their ultimate weapon is deepfakes: impersonating real people using artificial intelligence (AI) techniques. Now, the multi-million-dollar question is: Can organizations effectively employ AI to combat fraudsters who use the same tools?

    According to a Regula identity verification report, a whopping one-third of global businesses have already fallen victim to deepfake fraud, with fraudulent activities involving deepfake voice and video posing significant threats to the banking sector.

    For instance, fraudsters can easily pretend to be you to gain access to your bank account. Stateside, almost half of the companies surveyed admitted to being targeted with voice deepfakes last year, well above the global average of 29%. It’s like a blockbuster heist, but in the digital realm.

    And as AI technology for creating deepfakes becomes more accessible, the risk of businesses being affected only increases. That poses a question: Should the identity verification process be adjusted?

    Related: Deepfake Scams Are Becoming So Sophisticated, They Could Start Impersonating Your Boss And Coworkers

    Endless race

    Luckily, we’re not at the “Terminator” stage yet. Right now, most deepfakes are still detectable, either by eagle-eyed humans or by AI technologies that have been integrated into ID verification solutions for quite some time. But don’t let your guard down. Deepfake threats are evolving quickly, and we are on the verge of seeing samples persuasive enough to arouse little suspicion, even under deliberate scrutiny.

    The good news is that AI, the superhero we’ve enlisted to fight good old “handmade” identity fraud, is now being trained to spot fakes created by its fellow AI buddies. How does it manage this magic? First of all, AI models don’t work in a vacuum; human-fed data and clever algorithms shape them. Researchers can develop AI-powered tools to root out the bad guys behind synthetic fraud and deepfakes.

    The core idea of this protective technology is to be on the lookout for anything fishy or inconsistent during ID liveness checks and “selfie” sessions (where you snap a live photo or video with your ID). An AI-powered identity verification solution becomes a digital Sherlock Holmes. It can detect both changes that occur over time, like shifts in lighting or movement, and sneaky changes within the image itself, like crude copy-pasting or image stitching.
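
    To make those two categories of checks a bit more concrete, here is a minimal, purely illustrative Python sketch. The function names, heuristics and synthetic data are invented for this article and are not any vendor’s actual detection logic; real systems rely on trained models rather than hand-written rules.

```python
import numpy as np

def temporal_liveness_score(frames: list) -> float:
    """Mean absolute difference between consecutive grayscale frames.
    Near-zero scores suggest a replayed static photo rather than live video.
    Illustrative heuristic only."""
    diffs = [np.abs(a.astype(float) - b.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

def block_noise_spread(frame, block: int = 16) -> float:
    """Spread of per-block pixel variance within one frame. A crudely
    pasted or stitched region often has a noise level that stands out."""
    h, w = frame.shape[:2]
    variances = [frame[y:y + block, x:x + block].astype(float).var()
                 for y in range(0, h - block + 1, block)
                 for x in range(0, w - block + 1, block)]
    return float(np.std(variances))

# Toy usage with synthetic frames standing in for a capture session.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(5)]
print("temporal score:", temporal_liveness_score(frames))
print("noise spread:  ", block_noise_spread(frames[0]))
```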

    Fortunately, AI-generated fraud still has some blind spots, and organizations should leverage those weak points. Deepfakes, for instance, often fail to capture shadows correctly and have odd-looking backgrounds. Fake documents typically lack optically variable security elements and fail to display the specific images that genuine documents reveal at certain angles.

    Another key challenge criminals face is that many AI models are primarily trained using static face images, mainly because those are more readily available online. These models struggle to deliver realism in liveness “3D” video sessions, where individuals must turn their heads.

    One more vulnerability organizations can exploit is that documents are harder to manipulate for authentication than a face is to fake (or “swap”) during a liveness session. This is because criminals typically have access only to flat, two-dimensional ID scans. Moreover, modern IDs often incorporate dynamic security features that are visible only when the documents are in motion. The industry is constantly innovating in this area, making it nearly impossible to create convincing fake documents that can pass a capture session with liveness validation, where the documents must be rotated at different angles. Hence, requiring physical IDs for a liveness check can significantly boost an organization’s security.

    While the AI training for ID verification solutions keeps evolving, it’s essentially a constant cat-and-mouse game with fraudsters, and the results are often unpredictable. It is even more intriguing that criminals are also training AI to outsmart enhanced AI detection, creating a continuous cycle of detection and evasion.

    Take age verification, for example. Fraudsters can employ masks and filters that make people appear older during a liveness test. In response to such tactics, researchers are pushed to identify fresh cues or signs of manipulated media and train their systems to spot them. It’s a back-and-forth battle that keeps going, with each side trying to outsmart the other.

    Related: The Deepfake Threat is Real. Here Are 3 Ways to Protect Your Business

    Maximum level of security

    In light of all we’ve explored thus far, the question looms: What steps should we take?

    First, to achieve the highest level of security in ID verification, toss out the old playbook and embrace a liveness-centric approach for identity checks. What’s the essence of it?

    While most AI-generated forgeries still lack the naturalness needed for convincing liveness sessions, organizations seeking maximum security should work exclusively with physical objects — no scans, no photos — just real documents and real people.

    In the ID verification process, the solution must validate both the liveness and authenticity of the document and the individual presenting it.

    This should also be supported by an AI verification model trained to detect even the most subtle video or image manipulations, which might be invisible to the human eye. Such a model can also evaluate other parameters that could flag abnormal user behavior: the device used to access a service, its location, interaction history, image stability and other factors that help verify the authenticity of the identity in question. It’s like piecing together a puzzle to determine whether everything adds up.
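
    As a rough illustration of how those auxiliary signals might be combined, here is a hypothetical Python sketch. The signal names, weights and threshold are made up for the example; a real solution would calibrate them on historical fraud data.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    known_device: bool      # device previously seen for this account
    location_matches: bool  # IP/GPS consistent with prior activity
    prior_sessions: int     # length of interaction history
    image_stability: float  # 0.0 (very shaky or edited) .. 1.0 (stable)

def risk_score(s: SessionSignals) -> float:
    """Weighted sum of fraud indicators; higher means riskier.
    Weights are illustrative, not calibrated values."""
    score = 0.0
    score += 0.0 if s.known_device else 0.3
    score += 0.0 if s.location_matches else 0.25
    score += 0.2 if s.prior_sessions == 0 else 0.0
    score += (1.0 - s.image_stability) * 0.25
    return round(score, 2)

signals = SessionSignals(known_device=False, location_matches=True,
                         prior_sessions=0, image_stability=0.4)
print(risk_score(signals))  # 0.65 -> e.g. escalate to manual review
```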

    And one final tip: ask customers to use their mobile phones during liveness sessions instead of a computer’s webcam. It is generally much more difficult for fraudsters to swap images or videos when a mobile phone’s camera is used.

    To wrap it up, AI is the ultimate sidekick for the good guys, ensuring the bad guys can’t sneak past those defenses. Still, AI models need guidance from us humans to stay on the right track. But together, we are superb at spotting fraud.

    Ihar Kliashchou

  • Why the Online Dating Experience Needs to Change | Entrepreneur

    In today’s world, online dating has become the norm for many singles looking for love. However, despite the growing popularity of dating apps, many problems still plague the industry. From ghosting to toxic behavior, online dating can be a frustrating and exhausting experience for many users. In this article, we explore why we need a new online dating experience and how it can revolutionize how we connect with others.

    Turn dating apps from a chore into a thrill

    A study by Singles found that 78.37% of adults have experienced online dating burnout. We have grown used to spending more time on dating apps since the pandemic began, but it takes a toll on everyone’s mental health and on the overall excitement of online dating. Instead of making users mindlessly swipe through profiles, dating apps might incorporate gamification elements that increase engagement and help foster more authentic connections. By adding games or challenges, users can interact with each other in a playful and non-intimidating way, helping to break the ice and create a more relaxed atmosphere.

    Additionally, interesting prompts can help spark conversation and allow users to showcase their personality and interests beyond just a few photos and a bio. These elements can ultimately lead to more meaningful connections and a more positive overall dating experience.

    Related: 7 Ways Dating Apps Are Lying To You

    The silent treatment leaves dating app users disconnected and frustrated

    Dealing with sudden silence or breadcrumbing is a common frustration for dating app users, with 43% of dating app users in the United States reporting having been ghosted at least once. Breadcrumbing is also prevalent, with 22% of respondents reporting that they have experienced it.

    To address these issues, dating apps have introduced new features. For example, some apps use machine learning algorithms to detect potentially offensive or inappropriate messages, while others offer read receipts to let users know when their messages have been read. By implementing such features, dating apps are helping to reduce the negative experiences that users often encounter on their platforms.
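
    As a toy illustration of the kind of message screening described above, the sketch below trains a tiny text classifier with scikit-learn. The sample messages, labels and threshold are invented for the example; production systems use far larger datasets and more sophisticated models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = potentially abusive, 0 = fine.
messages = [
    "hey, how was your weekend?",
    "loved your hiking photos!",
    "answer me right now or else",
    "you are worthless, nobody likes you",
]
labels = [0, 0, 1, 1]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

incoming = ["nobody will ever like you"]
prob = model.predict_proba(incoming)[0][1]
if prob > 0.5:
    print(f"flag for review (score={prob:.2f})")
else:
    print(f"deliver normally (score={prob:.2f})")
```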

    Get it right, or risk being matched with the wrong type

    Matching with the wrong type of person is a common problem on dating apps. Often, the issue lies in how users portray themselves on their profiles. A survey conducted at the end of 2022 found that nearly 47% of respondents had lied on their dating profiles. The solution to this problem is for dating apps to encourage users to be more authentic and showcase their true lifestyle. For example, some dating apps have a feature that allows users to upload videos to their profiles, giving potential matches a better sense of who they are.

    Related: Online Dating Scammer Steals $1.8 Million from His Victims

    Toxic behavior is lurking around every corner

    Many online dating users experience harassment or verbal abuse. According to a survey by Pew Research Center, 41% of women have experienced online harassment. Dating apps need to come up with solutions to stop it. These can include introducing reporting features, moderating user-generated content and collaborating with organizations that promote online safety.

    Shield yourself from fraud and outsmart scammers

    Scammers are a growing problem on dating apps, with many users falling victim to fraud. As reported by the Federal Trade Commission, romance scams resulted in $1.3 billion in losses in 2022, with a median loss of $4,400. The solution to this problem is for dating apps to introduce better verification processes. This can include verifying users’ identities through social media or requiring users to take a selfie to prove they are who they say they are. For example, combining phone-number-only registration with photo verification can keep fake profiles to a minimum.

    Related: Your Identity Could Be Used in Online Dating Scams. Here’s How to Protect Yourself

    Is online dating on the verge of failing or blossoming?

    In conclusion, the world of online dating is constantly evolving, but many of the problems that plague existing dating apps persist. These issues can lead to frustration, disappointment, and even harm. We need a new online dating experience that prioritizes fun, authenticity, safety, and connection. Whether through gamification, better communication formats or more authentic user profiles, there are ways to create a better online dating experience for all. By recognizing and addressing the common problems of dating apps, we can create an environment that fosters healthy relationships and real connections. The time has come for a change in the online dating landscape, and we’re excited to be a part of it.

    Marina Anderson
