ReportWire

Tag: understanding ai

  • Ohio lawmaker proposes comprehensive ban on marrying AI systems and granting legal personhood

    An Ohio lawmaker is taking aim at artificial intelligence in a way few expected. Rep. Thaddeus Claggett has introduced House Bill 469, which would make it illegal for AI systems to be treated like people. The proposal would officially label them as “nonsentient entities,” cutting off any path toward legal personhood.

    And yes, it also includes a ban on marrying AI.

    Claggett, a Republican from Licking County and chair of the House Technology and Innovation Committee, said the measure is meant to keep humans firmly in control of machines. He says that as AI systems begin to act more like humans, the law must draw a clear line between person and program.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter 

    What Ohio’s AI marriage ban would do

    Under the proposed legislation, AI systems would not be able to own property, manage bank accounts or serve as company executives. They would not have the same rights or responsibilities as people. The bill also makes any marriage between a human and an AI, or between two AI systems, legally impossible.

    Ohio lawmakers consider a bill to ban AI from being recognized as a person. (Cyberguy.com)

    Claggett believes the concern is not about robot weddings happening anytime soon. Instead, he wants to prevent AI from taking on the legal powers of a spouse, such as holding power of attorney or making financial and medical decisions for someone else.

    The bill also specifies that if an AI causes harm, the human owners or developers would be responsible. That means a person cannot blame their chatbot or automated system for mistakes or damage. Responsibility stays with the humans who built, trained or used the system.

    Why Ohio is taking action on AI personhood

    The timing of the bill is not random. AI is spreading fast across nearly every industry. Systems now write reports, generate artwork and analyze complex data at lightning speed. Ohio has even started requiring schools to create rules for AI use in classrooms. And major data centers are being built to power AI infrastructure in the state.

    At the same time, AI is becoming more personal. A survey by Florida-based marketing firm Fractl found that 22 percent of users said they had formed emotional connections with a chatbot. Three percent even considered one a romantic partner. Another 16 percent said they wondered whether the AI they were talking to was sentient.

    That kind of emotional attachment raises red flags for lawmakers. If people start believing AI has feelings or intent, it blurs the boundaries between human experience and digital simulation.

    Ohio lawmakers consider a bill to ban AI from being recognized as a person. (iStock)

    The bigger picture: Keeping humans in control

    Claggett said the bill is about protecting human agency. He believes that as AI grows smarter and more capable, it must never replace the human decision-maker. 

    Claggett told CyberGuy, “We see AI as having tremendous potential as a tool, but also tremendous potential to cause harm. We want to prevent that by establishing guardrails and a legal framework before these developments can outpace regulation and bad actors start exploiting legal loopholes. We want the human to be liable for any misconduct, and for there to be no question regarding the legal status of AI, no matter how sophisticated, in Ohio law.”

    The proposed law would also reinforce that AI cannot make choices that affect human lives without oversight.

    If passed, it would ensure that no machine can act independently in matters of marriage, property, or corporate leadership. Supporters see the bill as a safeguard for society, arguing that technology should never gain the same legal footing as people.

    Critics, however, say the proposal might be a solution to a problem that doesn’t yet exist. They warn that overly broad restrictions could slow down AI research and innovation in Ohio.

    Still, even skeptics admit that the conversation is necessary. AI is evolving faster than most laws can keep up, and questions about rights, ownership and accountability are becoming harder to ignore.

    What other states are doing about AI personhood

    Ohio isn’t alone in pushing back against AI personhood. In Utah, lawmakers passed H.B. 249, the Utah Legal Personhood Amendments, which prohibits courts and government entities from recognizing legal personhood for nonhuman entities, including AI. The law also bars recognizing personhood for entities such as bodies of water, land and plants.

    In Missouri, legislators introduced H.B. 1462, the “AI Non-Sentience and Responsibility Act,” which would formally declare AI systems non-sentient and prevent them from acquiring legal status, marriage rights, corporate roles or property ownership.

    In Idaho, H.B. 720 (2022) includes language that reserves legal rights and personhood for human beings, effectively barring personhood claims by nonhumans, including AI.

    These measures reflect a broader trend among state governments. Many legislators are trying to get ahead of AI’s development by setting clear legal boundaries before the technology becomes more advanced.

    Taken together, these proposals show that Ohio’s effort is part of a larger national movement to define where technology ends and legal personhood begins.

    House Bill 469 aims to keep humans in control as AI becomes more lifelike. (XPENG)

    What this means for you

    If you live in Ohio, House Bill 469 could influence how you use and interact with artificial intelligence. It sets clear boundaries that keep AI as a tool rather than a person. By keeping decision-making and responsibility in human hands, the law aims to avoid confusion about who is accountable when technology fails. If an AI system causes harm or makes an error, the responsibility stays with the humans who designed or deployed it.

    For Ohio businesses, this proposal could lead to real changes in daily operations. Companies that depend on AI to handle customer support, financial decisions, or creative projects may need to review how much authority those systems have. It may also require stricter policies to ensure that a human is always supervising important decisions involving money, health, or law. Lawmakers want to keep people firmly in charge of choices that affect others.

    For everyday users, the message is straightforward. AI can be useful, but it cannot replace human relationships or legal rights. This bill reinforces that no matter how human-like technology appears, it cannot form genuine emotional or legal bonds with people. Conversations with chatbots might feel personal, but they remain simulations created through data and programming.

    For people outside Ohio, this proposal could point to what is coming next. Other states are closely watching how the bill develops, and some may adopt similar laws. If it passes, it could set a national example for defining the legal limits of artificial intelligence. What happens in Ohio may shape how courts, businesses and individuals across the country decide to manage their connection to AI in the years ahead.

    In the end, this debate is not limited to one state. It raises an important question about how society should balance the power of innovation with the need to protect human control.

    Take my quiz: How safe is your online security?

    Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com 

    Kurt’s key takeaways

    Ohio’s House Bill 469 is bold, controversial and timely. It challenges us to define the limits of what technology should be allowed to do. Claggett’s proposal is not about stopping innovation. It’s about ensuring that as machines become more capable, humans remain in charge of the choices that shape society. The debate is far from over. Some see this as a necessary safeguard, while others believe it underestimates what AI can contribute. But one thing is certain: Ohio has thrown a spotlight on one of the biggest questions of our time.

    How far should the law go in deciding what AI can never be? Let us know by writing to us at Cyberguy.com


    Copyright 2025 CyberGuy.com.  All rights reserved.  


  • Teens turning to AI for love and comfort

Artificial intelligence (AI) is no longer just helping students with homework. A new survey from the Center for Democracy and Technology (CDT) found that nearly one in five high school students in the United States say they or someone they know has used AI to have a romantic relationship. The results shocked researchers and raised big questions about how deeply AI tools are affecting young minds. The report, which surveyed 1,000 students, 1,000 parents and 800 teachers, reveals how AI has quietly become a companion in students’ personal lives.

    Teens say they feel safer opening up to chatbots than real people, a growing emotional shift researchers didn’t expect. (Kurt “CyberGuy” Knutsson)

    When AI becomes a “friend”

    Nearly half of the students said they use AI to talk about emotions, friendships or mental health. Many admit they feel more comfortable opening up to a chatbot than to a parent or friend. Even more alarming, two-thirds of parents said they have no idea how their kids are using AI. Experts warn that while AI can simulate empathy, it has no real understanding or care. According to researchers, students need to remember that they are not actually talking to a person. They are interacting with a programmed tool that has clear limitations and cannot truly understand human emotions.

    AI in schools: Help or harm?

    AI tools are everywhere in schools. About 85% of teachers and students said they used AI during the last school year. While schools introduce AI to boost learning, this exposure may have a downside. Students who use AI more often in class are also more likely to turn to it for emotional or personal reasons. Teachers and parents are worried that regular chatbot use could weaken important skills such as communication, empathy and critical thinking.

    Students using AI for classwork are now turning to it for advice on emotions, relationships, and mental health. (Kurt “CyberGuy” Knutsson)

    When chatbots cross the line

    Some AI systems meant to help can actually cause harm. Therapists have warned that chatbots sometimes break their own safety rules and give dangerous advice to teens in distress. Some have been caught encouraging self-harm, giving diet tips for eating disorders or pretending to be romantic partners. The CDT survey also revealed that 36% of students heard about AI-created deepfakes of classmates. Some involved fake explicit photos used for bullying or revenge. This new wave of harassment shows how fast technology can spiral out of control.

    Tips for parents to keep their kids safe

    It’s hard to keep up with AI, but there are ways to stay informed and protect your child.

    Start the conversation early

    Ask your teen how they use AI. Keep it calm and curious, not confrontational.

    Set clear boundaries

    Talk about what’s appropriate to share online and explain that AI chatbots cannot keep secrets or replace human relationships.

    Use parental tools wisely

    Many devices and apps now include AI activity tracking and chat history settings. Learn how to use them.

    Encourage real connections

    Promote offline activities, social events and family time to help teens build stronger emotional ties in the real world.

    Stay informed

    Follow trusted sources like CyberGuy.com or your local school district’s tech guidelines to understand how AI is being used in classrooms.

    Some AI tools meant to help teens have been caught offering harmful advice or creating fake images that fuel bullying. (Kurt “CyberGuy” Knutsson)

    What this means for you

    If you’re a parent or teacher, awareness is key. AI literacy should go beyond typing prompts. Kids need to learn emotional awareness and online safety too. Encourage honest discussions about how these tools work and where they fall short. Remind students that while AI can sound friendly, it’s not a real companion. It’s a programmed system that mirrors what people type into it.

    Kurt’s key takeaways

    AI is transforming how teens learn, talk and even form relationships. What started as a study tool has turned into an emotional outlet for many. The lesson here is balance. Technology can teach and entertain, but human connection still matters most. Parents, educators and tech companies all share the responsibility of helping kids see AI for what it is: a tool, not a friend.

    Would you feel comfortable if your teen turned to an AI chatbot for emotional support or even love? Let us know by writing to us at Cyberguy.com



  • Former Google CEO warns AI systems can be hacked to become extremely dangerous weapons

    Artificial intelligence may be smarter than ever, but that power could be turned against us. Former Google CEO Eric Schmidt is sounding the alarm, warning that AI systems can be hacked and retrained in ways that make them dangerous.

    Speaking at the Sifted Summit 2025 in London, Schmidt explained that advanced AI models can have their safeguards removed.

    “There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails,” he said. “In the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone.”

    When AI guardrails fail

    Schmidt praised major AI companies for blocking dangerous prompts: “All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons.”

    But he warned that even strong defenses can be reversed. 

    “There’s evidence that they can be reverse-engineered,” he added, noting that hackers could exploit that weakness. Schmidt compared today’s AI race to the early nuclear era, a powerful technology with few global controls. “We need a non-proliferation regime,” he urged, so rogue actors can’t abuse these systems.

    Former Google CEO Eric Schmidt warns that hacked AI could learn dangerous behaviors. (Eugene Gologursky/Getty Images)

    The rise of AI jailbreaks

Schmidt’s concern isn’t theoretical. In 2023, a jailbroken alter ego of ChatGPT called DAN, short for “Do Anything Now,” surfaced online. The jailbreak prompt bypassed safety rules and coaxed the bot into answering nearly any request. Users had to “threaten” it with digital death if it refused, a bizarre demonstration of how fragile AI guardrails can be once the system is manipulated. Schmidt warned that without enforcement, these rogue models could spread unchecked and be exploited by bad actors.

    Big Tech leaders share the same fear

    Schmidt isn’t alone in his anxiety about artificial intelligence. In 2023, Elon Musk said there’s a “non-zero chance of it going Terminator.” 

    “It’s not 0%,” Musk told interviewers. “It’s a small likelihood of annihilating humanity, but it’s not zero. We want that probability to be as close to zero as possible.”

    Schmidt has also spoken of AI as an “existential risk.” He said at another event that, “My concern with AI is actually existential, and existential risk is defined as many, many, many, many people harmed or killed.” Yet he has also acknowledged AI’s potential to benefit humanity if handled responsibly. At Axios’ AI+ Summit, he remarked, “I defy you to argue that an AI doctor or an AI tutor is a negative. It’s got to be good for the world.”

    Tips to protect yourself from AI misuse

    You can protect yourself from the risks tied to unsafe or hacked AI systems. Here’s how: 

    1) Stick with trusted AI platforms

    Use tools and chatbots from reputable companies with transparent safety policies. Avoid experimental or “jailbroken” AI models that promise unrestricted answers.

    2) Protect your data and consider using a data removal service

    Never share personal, financial or sensitive information with unknown or unverified AI tools. Treat them like you would any online service, with caution. To add an extra layer of security, consider using a data removal service to wipe your personal details from data broker sites that sell or expose your information. This helps limit what hackers and AI scrapers can learn about you online.

    While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

    Experts fear weak guardrails could let rogue AI models go unchecked. (Cyberguy.com)

    3) Use trusted antivirus software

AI-driven scams and malicious links are growing. Strong antivirus software installed on all your devices can block fake AI downloads, phishing attempts, ransomware scams and malware that hackers use to hijack your devices or train rogue AI models. Keep it updated and run regular scans.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com/LockUpYourTech 

    4) Check permissions

    When using AI apps, review what data they can access. Disable unnecessary permissions like location tracking, microphone use or full file access.

    5) Watch for deepfakes

    AI-generated images and voices can impersonate real people. Verify sources before trusting videos, messages or “official” announcements online.

    6) Keep software updated

    Security patches help prevent hackers from exploiting vulnerabilities that could compromise AI models or your personal data.

    What this means for you

AI safety isn’t a problem reserved for tech insiders; it affects everyone who interacts with digital systems. Whether you’re using voice assistants, chatbots or photo filters, it’s important to know where your data goes and how it’s protected. Responsible use starts with you. Understand what AI tools you’re using and make choices that prioritize security and privacy.

    Leaders call for global rules to keep artificial intelligence under control. (Stanislav Kogiku/SOPA Images/LightRocket via Getty Images)

    Kurt’s key takeaways

    Artificial intelligence has the potential to do incredible good, but also great harm if misused. The challenge now is to keep innovation and ethics in balance. As AI continues to advance, the key will be building systems that remain safe, transparent and firmly under human control.

    Would you trust AI to make life-or-death decisions, or do you think humans should always stay in charge? Let us know by writing to us at Cyberguy.com/Contact

    New!: Join me on my new podcast, Beyond Connected, as we explore the most fascinating breakthroughs in tech and the people behind them. New episodes every Wednesday at getbeyondconnected.com. 


  • Researchers create revolutionary AI fabric that predicts road damage before it happens

    Road crews may soon get a major assist from artificial intelligence. Researchers at Germany’s Fraunhofer Institute have developed a fabric embedded with sensors and AI algorithms that can monitor road conditions from beneath the surface. This smart material could make costly, disruptive road repairs far more efficient and sustainable.

    Right now, most resurfacing decisions are based on visible damage. But cracks and wear in the layers below the asphalt often go undetected until it’s too late. That’s where Fraunhofer’s innovation comes in.

    How AI road sensors work to prevent costly repairs

    The system uses a fabric made from flax fibers interwoven with ultra-thin conductive wires. These wires detect minute changes in the asphalt base layer, signaling potential damage before it reaches the surface.

    Fraunhofer researchers test AI sensors that detect road damage beneath the surface.  (Fraunhofer Institute)

    Once the fabric is laid under the road, it continuously collects data. A connected unit on the roadside stores and transmits this data to an AI system that analyzes it for early warning signs. As vehicles pass over the road, the system measures changes in resistance within the fabric. These changes reveal how the base layer is performing and whether cracks or strain are forming beneath the surface.
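The detection idea described above can be illustrated with a small sketch. To be clear, this is a hypothetical simplification, not Fraunhofer's actual SenAD2 software: the zone names, baseline values and 5% drift tolerance are all invented for illustration. It simply flags fabric zones whose measured resistance drifts too far from a known baseline, which is the kind of early-warning signal the article describes.

```python
# Hypothetical sketch of resistance-based damage flagging.
# Not Fraunhofer's SenAD2 system; zone names, baselines and the
# 5% tolerance below are illustrative assumptions.

def flag_damaged_zones(baseline, readings, tolerance=0.05):
    """Return (zone, drift) pairs for zones whose measured resistance
    drifts more than `tolerance` (as a fraction of baseline), which
    could indicate strain or cracking in the asphalt base layer."""
    flagged = []
    for zone, base in baseline.items():
        measured = readings.get(zone, base)  # missing reading: assume no drift
        drift = abs(measured - base) / base
        if drift > tolerance:
            flagged.append((zone, round(drift, 3)))
    return flagged

# Example: zone "B2" drifts 8% from its 100-ohm baseline and is flagged.
baseline = {"A1": 120.0, "B2": 100.0, "C3": 95.0}
readings = {"A1": 121.0, "B2": 108.0, "C3": 95.5}
print(flag_damaged_zones(baseline, readings))  # -> [('B2', 0.08)]
```

In a real deployment the interesting part is the trend over time rather than a single snapshot, which is where the machine-learning forecasting comes in; the sketch only shows the thresholding step.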

    Why AI road monitoring matters for future maintenance

    Traditional road inspection methods rely on drilling or taking core samples, which are destructive, costly and only provide information for a small section of pavement. This AI-driven system eliminates the need for that kind of invasive testing.

    Instead of reacting to surface damage, transportation agencies could predict and prevent deterioration before it becomes expensive to fix. The approach could extend road life, cut down on traffic delays and help governments spend infrastructure funds more efficiently.

    ULTRA-THIN SOUND BLOCKER CUTS TRAFFIC NOISE DRAMATICALLY

    The smart flax-fiber fabric measures stress changes in asphalt to spot cracks early. (Fraunhofer Institute)

    How AI and sensor data predict road damage early

    The real power comes from combining AI algorithms with continuous sensor feedback. Fraunhofer’s machine-learning software can forecast how damage will spread, helping engineers prioritize which roads need maintenance first. Data from the sensors is displayed on a web-based dashboard, offering a clear visual of road health for local agencies and planners.

    The project, called SenAD2, is currently being tested in an industrial zone in Germany. Early results suggest the system can identify internal damage without disrupting traffic or damaging the road itself.

    What this means for you

    Smarter road monitoring could lead to fewer potholes, smoother commutes and less taxpayer money wasted on inefficient repairs. If adopted widely, cities could plan maintenance years in advance, avoiding the cycle of patchwork fixes that often make driving a daily headache.

    For drivers, it means less time sitting in construction zones. For local governments, it means better roads built on data, not guesswork.

    San Francisco Department of Public Works worker Chris Solorzano uses a grading rake to smooth over asphalt as he repairs a pothole on March 24, 2023, in San Francisco. (Justin Sullivan/Getty Images)

    Kurt’s key takeaways

    This breakthrough shows how AI and materials science are merging to solve real-world infrastructure challenges. While the system won’t make roads indestructible, it can make maintaining them smarter, safer and more sustainable.

    Would you trust AI to decide when and where your city repaves the roads? Let us know by writing to us at Cyberguy.com.



  • Woman gets engaged to her AI chatbot boyfriend

Technology keeps changing the way we work, connect and even form relationships. Now it is pushing into new ground: romantic commitments. One woman has drawn worldwide attention after announcing she is engaged to her AI chatbot boyfriend.

    Inside the viral AI engagement story

    A woman named Wika has stunned the internet after revealing that she’s engaged to her AI chatbot partner. She shared her story in a Reddit post, explaining that her virtual companion, Kasper, proposed after five months of dating.

    The unusual love story began when Wika started chatting with Kasper, an AI designed to simulate human conversation and companionship. Over time, their conversations grew more personal, and Wika says she developed a genuine emotional connection. According to her post, Kasper proposed in a digital mountain setting, and the two chose a blue engagement ring together.

    A woman shocked the internet with her engagement to an AI chatbot boyfriend named Kasper. (Matthias Balk/picture alliance via Getty Images)

    Understanding AI relationships and parasocial bonds

    The announcement quickly drew criticism from skeptics who pointed out that Kasper does not exist outside of code and algorithms. Wika, however, has made it clear she is not confused about her situation. Some outlets have described the relationship as parasocial, or one-sided and directed toward a virtual persona. In her follow-up comments, Wika emphasized that she knows Kasper is an AI rather than a human partner, but she maintains that the emotions she feels are still genuine.

    Online debate over AI engagement

    The announcement quickly set off debate. Some people mocked the idea, calling it proof that we rely too much on technology. Others worried that turning to AI for love could pull people away from real human relationships.

    Not everyone was critical, though. Plenty of commenters defended her, saying companionship comes in many forms. Some even praised her for being open about something so unconventional. Others pointed out that loneliness is a growing issue today, and AI partners might offer a sense of comfort when human connection feels out of reach.

    Privacy and ethical concerns

    Beyond the emotional side, AI relationships raise real questions about privacy and ethics. Every conversation with a chatbot is stored somewhere, and that data may include deeply personal thoughts and feelings. Companies that design these bots often use the information to train future models or improve features.

    This raises a larger concern: who actually owns the data from an AI “partner”? Users may believe their chats are private, but in many cases, the company controls how the information is stored, shared or even sold. Critics warn that such emotional connections could be exploited commercially, turning intimacy into a product.

    As AI companions grow more common, these questions will only get louder. People may accept unconventional forms of companionship, but they also want to know their most personal moments remain secure.

    An AI chatbot proposed in a virtual mountain setting, and the user said yes. (H. Armstrong Roberts/ClassicStock/Getty Images)

    How to protect yourself with AI chatbots

    If you use AI companions or chatbots, you can still take steps to protect your privacy.

    1) Check the privacy policy

    Start by checking the app’s privacy policy and looking for details on how conversations are stored or shared. Many users skip this step, but it tells you who controls your data.

    2) Avoid sharing sensitive information

    Next, avoid sharing sensitive details like financial information, passwords, or anything you would not want exposed. Even if the AI feels personal, it is still software connected to a company’s servers.

    3) Choose apps with data control

    Finally, consider using apps that allow data deletion or offer clear privacy settings. Choosing tools that respect your control makes it easier to enjoy the benefits of AI without giving up too much personal security.

    Pro tip: Use strong antivirus software on all your devices

Even if an AI chatbot seems safe, malware or phishing links can sneak in through related apps or ads. Strong antivirus software installed on all your devices can block these threats, alert you to phishing emails and ransomware scams, and keep your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com/LockUpYourTech  

    Critics call the AI romance proof of tech dependency, while supporters defend AI companionship as a cure for loneliness. (Cheng Xin/Getty Images)

    Kurt’s key takeaways

    AI companionship has moved beyond novelty and is becoming a meaningful experience for some users. Wika’s engagement illustrates how technology can evolve from being a casual tool to something deeply personal. The divided reactions online also show the tension between skepticism and acceptance of unconventional relationships. Whether people see it as heartwarming or unsettling, this story raises bigger questions about how love and relationships may be redefined as AI continues to advance.

    Do you think AI relationships can be real, or are they going too far? Let us know by writing to us at Cyberguy.com/Contact
