ReportWire

Tag: chatgpt

  • How to Read Books Faster Using ChatGPT (3 Ways)



    There are two kinds of people in the world: those who read books and those who don’t. For me, books are tiring, and I can never discipline myself to finish an entire book on my own. However, I have a secret method that not only helps me read books faster but also understand complex ideas and analogies that I would have missed on my own. So this article is for every bibliophile and wannabe book reader, because I will tell you how to give your reading capabilities a boost with AI.

    Using AI to Review Books Before Reading

    AI has enabled us to do a lot more, and reading books with AI might give you the impression that I am talking about summarising a book in a few paragraphs. This is not the case, I assure you. When I talk about reading with AI’s help, I mean understanding the idea behind the book, simplifying that head-scratching chapter that left you confused, or even connecting different books in the same series or by the same author. There is a lot you can achieve with AI, so gear up, get your best book out, and let us learn the best ways to use AI while reading books.

    1. Simplification of Complex Ideas

    Some of the books we take on are complex and not easy to understand. It is tough to find the meaning of a certain phrase, or even a whole chapter, online, especially for a very specific book. With ChatGPT, however, you can find out very easily. All you have to do is follow the steps below.

    1. Open ChatGPT and upload images of the relevant pages.

    2. After you have uploaded the images, prompt ChatGPT to explain them in a simpler manner.


    3. You can even ask ChatGPT to give you bullet points for better understanding.


    4. If this is not helpful, you can use Advanced Voice Mode to have a further conversation and talk through your doubts.

    2. Using ChatGPT to Confirm Analogies

    You can also use ChatGPT to double-check your own analogies. This means you can see if you misinterpreted anything or made any errors. Type in your observations or analogies and get instant feedback on them. This is a great way to understand the depth of the book you are reading, and it will help you gain confidence and encourage you to dig deeper into your next read.

    1. Open ChatGPT and type in your analogies or theories.

    2. Then prompt ChatGPT to check and provide feedback on the given analogies in reference to the book you are reading.


    3. The response will give you clarification and will also correct any mistakes that you might have made.


    3. Summarize Lengthy Books

    The best part of having ChatGPT as your reading buddy is that you can skip boring chapters with zero judgment. Another bonus is that you can simply ask for a quick summary, and it will summarise the entire thing, which means you will not miss any relevant information from that particular chapter. You can upload the book in PDF format, paste the text directly, or even use pictures. Prompt it to summarise, and it will give you a summary of the desired chapter.

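
    For readers comfortable with a little scripting, the same chapter-summary workflow can be prepared programmatically before sending anything to a model. The sketch below is only an illustration under assumptions: the chunk size and prompt wording are arbitrary, and the actual model call is left out.

```python
# Hypothetical helper: split a long chapter into prompt-sized chunks
# so each piece fits comfortably in a model's context window.
def chunk_text(text, max_chars=4000):
    """Split text on paragraph boundaries into chunks of at most max_chars."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def build_summary_prompts(chapter_text, max_chars=4000):
    """Turn a chapter into one summarisation prompt per chunk."""
    return [
        f"Summarise the following book excerpt in bullet points:\n\n{chunk}"
        for chunk in chunk_text(chapter_text, max_chars)
    ]
```

    Each prompt can then be sent to a chat model of your choice, and the per-chunk summaries combined with a final "summarise these summaries" pass.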

    FAQs

    Q. What is ChatGPT Go?

    ChatGPT Go is an India-exclusive subscription plan that costs INR 399 a month. It gives you greater access than the free plan to popular features such as GPT-5, image generation, and file uploads.

    Q. How many images can I generate in the free plan of ChatGPT?

    You can generate up to three images in a day with the free plan of ChatGPT. If you want to do more than that, you can pay for a subscription plan.

    Wrapping Up

    This article discusses how ChatGPT is a great reading partner and how you can use it to speed up your reading. You can fact-check, summarise, and even test out your analogies with it. You can even use it to quiz your friends from your book club. So do check it out.

    You may also like to read:

    Have any questions related to our how-to guides, or anything in the world of technology? Check out our new GadgetsToUse AI Chatbot for free, powered by ChatGPT.

    You can also follow us for instant tech news at Google News or for tips and tricks, smartphones & gadgets reviews, join the GadgetsToUse Telegram Group, or subscribe to the GadgetsToUse Youtube Channel for the latest review videos.



    Dev Chaudhary


  • You can now buy things through ChatGPT with new


    ChatGPT wants to be your personal online shopper. 

    A new “Instant Checkout” feature lets users purchase a product the AI-powered bot brings up in a chat, without having to navigate outside of the app, ChatGPT creator OpenAI said in a statement Monday. 

    For example, if you query ChatGPT for the “best mattress under $1,000,” or “gift for an avid reader,” it will suggest what it believes to be the most relevant products from across the internet. If a consumer wants to purchase one of ChatGPT’s recommendations, they can now do so within the chat, so long as Instant Checkout supports the product. 

    Currently, ChatGPT users can buy directly from U.S. Etsy sellers from within a chat. Through a partnership with Shopify, ChatGPT will soon give consumers access to more than 1 million vendors, such as cosmetic company Glossier, shapewear company SKIMS, shoemaker Steve Madden and more. 

    The new tool marks ChatGPT’s foray into so-called agentic commerce, with the app acting as the shopper’s agent. In other words, ChatGPT interacts with both the buyer and the seller, while the merchant processes payment and fulfills the order. Merchants pay ChatGPT a small fee on completed transactions, OpenAI said. 

    “This marks the next step in agentic commerce, where ChatGPT doesn’t just help you find what to buy, it also helps you buy it. For shoppers, it’s seamless: go from chat to checkout in just a few taps. For sellers, it’s a new way to reach hundreds of millions of people while keeping full control of their payments, systems, and customer relationships,” OpenAI said in a statement Monday.

    For now, the technology, which the company codeveloped with payment processor Stripe, only supports single-item purchases, OpenAI said. 

    Discovering products in AI conversations

    Shopify on Monday said it has long aimed to allow merchants to sell to customers “anywhere AI conversations happen,” as more Americans rely on generative AI tools like ChatGPT to help them make decisions. 

    “Shopping is changing fast. People are discovering products in AI conversations, not just through search or ads,” Vanessa Lee, VP of product at Shopify, said in an article on the company’s website. “This will let our merchants show up naturally in those moments and give shoppers a way to buy without breaking their flow. It’s a really exciting shift for commerce.”

    Shopify said it wants to position its merchants at the forefront of a sea change in how online commerce is conducted. 

    “We’re making sure our merchants thrive in the era of agentic commerce,” said Lee. “We’re helping everyone from indie brands to household names reach shoppers in entirely new ways.”

    E-commerce giant Amazon is also wading into the world of agentic AI. Through its “Buy for Me” feature in the Amazon Shopping App, shoppers can purchase goods from vendors who don’t sell their products on Amazon.com without leaving the Amazon ecosystem. 

    “If a customer decides to proceed with a Buy for Me purchase, they tap on the Buy for Me button on the product detail page to request Amazon make the purchase from the brand retailer’s website on their behalf,” Amazon explains on its corporate website. “Customers are taken to an Amazon checkout page where they confirm order details, including preferred delivery address, applicable taxes and shipping fees, and payment method.”


  • ChatGPT introduces new parental controls amid concerns over teen safety


    OpenAI, the company that developed ChatGPT, announced new parental controls on Monday aimed at helping protect young people who interact with its generative artificial intelligence program.  

    All ChatGPT users will have access to the control features from Monday onward, the company said.

    The announcement comes as OpenAI, which technically allows users as young as 13 to sign up, contends with mounting public pressure to prioritize the safety of ChatGPT for teenagers. (OpenAI says on its website that it requires users ages 13 to 18 to obtain parental consent before using ChatGPT.)

    In August, the California-based technology company pledged to implement changes to its flagship product after facing a wrongful death lawsuit by parents of a 16-year-old who alleged the chatbot led their son to take his own life. 

    OpenAI’s new controls will allow parents to link their own ChatGPT accounts to the accounts of their teenagers “and customize settings for a safe, age-appropriate experience,” OpenAI said in Monday’s announcement. Certain types of content are then automatically restricted on a teenager’s linked account, including graphic content, viral challenges, “sexual, romantic or violent” role-play, and “extreme beauty ideals,” according to the company.

    Along with content moderation, parents can opt to receive a notification from OpenAI should their child exhibit potential signs of harming themselves while interacting with ChatGPT.

    “If our systems detect potential harm, a small team of specially trained people reviews the situation,” the company said. “If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out.”

    The company also said it is “working on the right process and circumstances in which to reach law enforcement or other emergency services” in emergencies where a teen may be in imminent danger and a parent cannot be reached.

    “We know some teens turn to ChatGPT during hard moments, so we’ve built a new notification system to help parents know if something may be seriously wrong,” OpenAI said.

    OpenAI has introduced other measures recently aimed at helping safeguard younger ChatGPT users. The company said earlier this month that chatbot users identified as being under 18 will automatically be directed to a version that is governed by “age-appropriate” content rules. 

    “The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” the company said at the time. 

    It noted on Monday, however, that while guardrails help, “they’re not foolproof and can be bypassed if someone is intentionally trying to get around them.”

    People can use ChatGPT without creating an account, and parental controls and automatic content limits only work if users are signed in.

    “We will continue to thoughtfully iterate and improve over time,” the company said. “We recommend parents talk with their teens about healthy AI use and what that looks like for their family.”

    The Federal Trade Commission has started an inquiry into several social media and artificial intelligence companies, including OpenAI, about the potential harms to teens and children who use their chatbots as companions. 


  • OpenAI Rolls Out ChatGPT’s Ability to Buy Stuff for You


    OpenAI just made it possible to buy things directly from ChatGPT.

    Starting today, all ChatGPT users in the U.S. can use a new feature called Instant Checkout to purchase items from Etsy sellers without leaving the chat. OpenAI says more than a million Shopify merchants, including Glossier, SKIMS, and Spanx, are coming soon.

    For now, Instant Checkout only supports single-item purchases, but OpenAI plans to add multi-item carts and expand to more merchants and regions.

    The company also announced it’s open-sourcing the technology that powers Instant Checkout, the Agentic Commerce Protocol. Developed with payment processor Stripe, the protocol is meant to serve as a standard for AI-driven shopping and to make it easier for developers to integrate their stores with ChatGPT.

    This move puts OpenAI one step closer to its bigger goal of creating a fully functional AI agent. The industry as a whole is racing to launch so-called AI agents, virtual assistants that can theoretically handle tasks like writing reports, booking travel, shopping online, and scheduling appointments.

    Just last week, OpenAI rolled out ChatGPT Pulse, which conducts relevant research for users and connects to their email, calendars, and other apps to deliver a daily morning briefing. Another feature introduced this year, ChatGPT Agent, also links to users’ apps but still needs explicit prompts to carry out tasks.

    And in January, the company unveiled OpenAI Operator, a tool that can fill out online forms and place orders on its own—though shoppers still have to manually enter payment info at checkout.

    But one thing is becoming clear as the age of AI agents approaches: they’ll need access to a lot of our personal data to work properly, if they work at all.

    How Instant Checkout works

    A lot of ChatGPT users already turn to the chatbot for online shopping recommendations.

    Now, when a user asks something like “gift ideas for a housewarming” or “best running shoes under $100,” products that support Instant Checkout will display a “Buy” option. Users who tap on “Buy” will then confirm their order, shipping, and payment details directly in chat. Those with a ChatGPT subscription can pay with the card already on file or choose another payment method.

    The seller then handles the order, shipping, and fulfillment like they normally would. ChatGPT just acts as a middleman, providing the seller with the buyer’s information.
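
    OpenAI has not published the handoff format in this article, but the middleman role described above can be pictured as a small, structured order object that the agent passes to the merchant, who then charges the payment token and fulfills the order. The sketch below is purely illustrative; every field name and the fee rate are assumptions, not the real Agentic Commerce Protocol schema.

```python
# Toy illustration of the agent-as-middleman flow described above.
# All field names and the fee rate are hypothetical, not the real
# Agentic Commerce Protocol.
from dataclasses import dataclass

@dataclass
class OrderHandoff:
    product_id: str
    buyer_name: str
    shipping_address: str
    payment_token: str  # opaque token; the merchant, not the agent, charges it

def merchant_net(item_price_cents: int, platform_fee_rate: float = 0.02):
    """Return (merchant's take, platform fee) after a per-transaction fee."""
    fee = round(item_price_cents * platform_fee_rate)
    return item_price_cents - fee, fee

order = OrderHandoff("etsy-123", "A. Shopper", "123 Main St", "tok_abc")
net, fee = merchant_net(10_000)  # a $100.00 item at a hypothetical 2% fee
```

    The key design point, as the article notes, is that the agent never processes the payment itself; it only forwards enough information for the merchant’s existing systems to complete the order.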

    The service is free for users, but sellers will have to pay a small fee on completed purchases. OpenAI also says that items supporting Instant Checkout won’t be given preference in product results and won’t impact its recommendations overall.

    However, when ranking sellers of the same product, “whether Instant Checkout is enabled” will be considered to “optimize the user experience.”


    Bruce Gil


  • OpenAI takes on Google, Amazon with new agentic shopping system | TechCrunch


    ChatGPT users in the U.S. can now make Etsy and Shopify purchases within conversations, marking a next step towards the future of online shopping – both for consumers and the platforms that control product discovery, recommendation, and payments. In other words, OpenAI might be on the path to reshaping who holds power in e-commerce. 

    OpenAI’s new “Instant Checkout” feature is available to ChatGPT Pro, Plus, and Free logged-in users buying from U.S.-based Etsy sellers, with more than 1 million Shopify merchants like Glossier, Skims, Spanx, and Vuori “coming soon,” per OpenAI.  

    Instant Checkout builds on previous shopping features on ChatGPT that surfaced relevant products, images, reviews, prices, and direct links to merchants in response to shopping questions like “what should I get my friend who loves ceramics?” or “best sneakers to wear to the office.” Now, instead of having to leave the conversation, users can just tap “Buy” to confirm their order, shipping, and payment details (options include Apple Pay, Google Pay, Stripe, or credit card) to complete the purchase.  

    Last year, Perplexity introduced a similar in-chat shopping and payments feature. Microsoft also offers merchants the ability to create in-chat storefront capabilities with the Copilot Merchant Program. 

    This type of frictionless experience has the potential to spark a new movement in how people shop online – one that moves away from search engines like Google and e-commerce platforms like Amazon towards conversational agents with curated recommendations, comparisons, and easy checkout experiences. 

    It’s also setting the stage for new power brokers to emerge in e-commerce. Google and Amazon have long been the gatekeepers for retail discovery. If more purchases start inside AI chatbots, the firms behind them will suddenly have more control over what products are surfaced and what commissions or fees they charge.  

    Both Amazon and Google have previously leveraged their dominance to favor their own products or preferred partners, pushing down competitors in search results or charging steep fees to sellers simply to maintain visibility. OpenAI said in a blog post that the product results it surfaces are “organic and unsponsored, ranked purely on relevance to the user,” and that it will charge merchants a “small fee” for completed purchases.  


    TechCrunch has reached out to OpenAI for more information. 

    Along with OpenAI’s introduction of in-chat checkout, the AI firm also noted that it will open-source its Agentic Commerce Protocol (ACP), the technology behind Instant Checkout that it built with Stripe, so that other merchants and developers can integrate agentic checkout. 

    “Stripe is building the economic infrastructure for AI,” Will Gaybrick, president of technology and business at Stripe, said in a statement. “That means re-architecting today’s commerce systems and creating new AI-powered experiences for billions of people.” 

    While some may balk at handing ChatGPT private payment information, the company says orders, payments, and fulfillment are handled by the merchant using their existing systems. ChatGPT merely acts as an agent, an intermediary that can securely pass along information between user and merchant.  

    Open-sourcing ACP makes it easier for merchants to integrate with ChatGPT, widening the adoption of AI chatbots that function as a virtual storefront. It also expands OpenAI’s potential control as a gatekeeper for retail discovery and checkout, and could position the firm to be the de facto architect of the AI commerce ecosystem.  

    That would put it in tension with Google yet again, as the tech giant has recently launched its own open protocol for purchases initiated by AI agents, dubbed Agent Payments Protocol (AP2).


    Rebecca Bellan


  • How I use ChatGPT to Select the Best and Ripe Fruits



    While many people now order fruits from quick commerce platforms like BlinkIt, Instamart, and Zepto, I still prefer buying them from the market. However, choosing the right fruit from a basket can be overwhelming. You need to visually inspect each fruit to see whether it is ripe enough to eat, whether it is spoiled, whether it is juicy or mushy, and so on.

    To overcome this confusion, I recently started using ChatGPT to analyse fruits, which has helped me pick the perfect fruit each time. It has also saved me the embarrassment of taking a fruit home only to find it raw or not good enough to eat. Here’s how you can do the same.

    ChatGPT for Fruit Suggestions

    Ever since OpenAI introduced custom GPTs, several extension tools have emerged that work within ChatGPT with expertise in a certain field. One such tool is the Fruit Selector AI, which can analyse all types of fruits from a single image. It provides insights into which fruit you should select based on ripeness, shape, colour, and so on. It can also suggest which fruits you should eat under a specific diet plan. Here’s how you can use it.

    1. Open the ChatGPT app on your smartphone. This also works on desktop, but you will get the best experience on mobile, as it’s easier to take and upload photos on the go.

    2. Swipe right, and go to GPTs.

    3. Search for Fruit Selector AI.

    4. Here you can ask the tool which fruit you should eat for your dietary needs or fitness plans. If you have already decided, then you can directly upload the image and ask ChatGPT to make the choice.

    5. When you provide an unedited image, ChatGPT will give you an overview of which fruits you should pick.

    6. For more precise answers, you can edit the image and number each fruit manually, and then provide the image to ChatGPT.

    7. The tool will tell you precisely which fruit you should pick, based on your numbering.

    Using ChatGPT for Diet Suggestions

    While fruits are good for health, you need to be very specific about which fruits are best for you based on your dietary preferences. For example, people aiming for weight loss should look for high-fibre fruits such as guava, blueberries, and avocado, whereas those looking to gain weight or build muscle should opt for high-energy fruits like bananas, berries, and apples.

    Here are some prompts that you can utilise to build a perfect diet plan using ChatGPT and Fruit Selector:

    • Suggest some fruits for controlled weight loss over 3 months.
    • Which is the best budget-friendly high-protein fruit?
    • Are bananas good for recovery after gym? Suggest some other alternatives for the same.
    • Build a customized diet plan using fruits and veggies. My body weight is 80kg, I lift heavy weights, and my height is 180cm.
    • Which fruits have the lowest glycemic index?

    FAQs

    Q. Can I use ChatGPT to make a diet plan?

    Yes, ChatGPT can generate a detailed diet plan. Provide details like your current weight, height, exercise routine, preferred workouts, and final goal, and it will put together a proper dietary plan based on them.

    Q. How to use ChatGPT as a gym trainer?

    You can use custom GPTs like GymStreak or Hevy as your personal gym trainer. These built-in bots in ChatGPT suggest proper workouts, and you can also upload your workouts and ask them to find flaws in your technique.

    Q. Can ChatGPT count calories?

    ChatGPT can provide an estimate of the calories in the food you are consuming. You can either describe your food in text or directly upload an image of your plate, after which ChatGPT will review the contents of your food and provide a detailed calorie breakdown.

    Wrapping Up

    ChatGPT can be used in endless creative ways, and using it to hunt for the best fruits at the market is just one of them. Not only does it help you pick the best fruits, but you can also use the tool to find the best ones for your personal fitness or dietary needs. However, you should always consider professional help when picking a diet plan, as ChatGPT is not always perfect.




    Chinmay Dhumal


  • ChatGPT introduces new parental controls for teens

    Parents can now connect their ChatGPT accounts to their children’s and get notifications when sensitive issues are raised. Jo Ling Kent has more from Los Angeles.


  • Here’s Who Can See Your Chat History When You Talk to Each AI


    While AI tools like ChatGPT and Google Gemini can be helpful, they’re also potential privacy minefields.

    Most AI assistants save a complete record of your conversations, making them easily visible to anyone with access to your devices. Those conversations are also stored online, often indefinitely, so they could be exposed due to bugs or security breaches. In some cases, AI providers can even send your chats along to human reviewers.

    All of this should give you pause, especially if you plan to share your innermost thoughts with AI tools or use them to process personal information. To better protect your privacy, consider making some tweaks to your settings, using private conversation modes, or even turning to AI assistants that protect your privacy by default.


    To help make sense of the options, I looked through all the privacy settings and policies of every major AI assistant. Here’s what you need to know about what they do with your data, and what you can do about it:

    ChatGPT

    By default: ChatGPT uses your data to train AI, and warns that its “training data may incidentally include personal information.”
    Can humans review your chats? OpenAI’s ChatGPT FAQ says it may “review conversations” to improve its systems. The company also says it now scans conversations for threats of imminent physical harm, submitting them to human reviewers and possibly reporting them to law enforcement.
    Can you disable AI training? Yes. Go to Settings > Data controls > Improve the model for everyone.
    Is there a private chat mode? Yes. Click “Turn on temporary chat” in the top-right corner to keep a chat out of your history and avoid having it used to train AI.
    Can you share chats with others? Yes, by generating a shareable link. (OpenAI launched, then removed, a feature that let search engines index shared chats.)
    Are your chats used for targeted ads? OpenAI’s privacy policy says it does not sell or share personal data for contextual behavioral advertising, doesn’t process data for targeted ads, and doesn’t process sensitive personal data to infer characteristics about consumers.
    How long does it keep your data? Up to 30 days for temporary and deleted chats, though even some of those may be kept longer for “security and legal obligations.” All other data is stored indefinitely.

    Google Gemini

    By default: Gemini uses your data to train AI.
    Can humans review your chats? Yes. Google says not to enter “any data you wouldn’t want a reviewer to see.” Once a reviewer sees your data, Google keeps it for up to three years—even if you delete your chat history.
    Can you disable AI training? Yes. Go to myactivity.google.com/product/gemini, click the “Turn off” drop-down menu, then select either “Turn off” or “Turn off and delete activity.”
    Is there a private chat mode? Yes. In the left sidebar, hit the chat bubble with dashed lines next to the “New chat” button. (Alternatively, disabling Gemini Apps Activity will hide your chat history from the sidebar, but re-enabling it without deleting past data will bring your history back.)
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Google says it doesn’t use Gemini chats to show you ads, but the company’s privacy policy allows for it. Google says it will communicate any changes it makes to this policy.
    How long does it keep your data? Indefinitely, unless you turn on auto-deletion in Gemini Apps Activity.

    Anthropic Claude

    By default: From September 28 onward, Anthropic will use conversations to train AI unless you opt out.
    Can humans review your chats? No, though Anthropic reviews conversations flagged as violating its usage policies.
    Can you disable AI training? Yes. Head to Settings > Privacy and disable “Help improve Claude.”
    Is there a private chat mode? No. You must delete past conversations manually to hide them from your history.
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Anthropic doesn’t use conversations for targeted ads.
    How long does it keep your data? Up to two years, or seven years for prompts flagged for trust and safety violations.

    Microsoft Copilot

    By default: Microsoft uses your data to train AI.
    Can humans review your chats? Yes. Microsoft’s privacy policy says it uses “both automated and manual (human) methods of processing” personal data.
    Can you disable AI training? Yes, though the option is buried. Click your profile image > your name > Privacy and disable “Model training on text.”
    Is there a private chat mode? No. You must delete chats one by one or clear your history from Microsoft’s account page.
    Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat.
    Are your chats used for targeted ads? Microsoft uses your data for targeted ads and has discussed integrating ads with AI. You can disable this by clicking your profile image > your name > Privacy and disabling “Personalization and memory.” A separate link disables all personalized ads for your Microsoft account.
    How long does it keep your data? Data is stored for 18 months, unless you delete it manually.

    xAI Grok

    By default: Uses your data to train AI.
    Can humans review your chats? Yes. Grok’s FAQ says a “limited number” of “authorized personnel” may review conversations for quality or safety.
    Can you disable AI training? Yes. Click your profile image and go to Settings > Data Controls, then disable “Improve the Model.”
    Is there a private chat mode? Click the “Private” button at the top right to keep a chat out of your history and avoid having it used to train AI.
    Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat.
    Are your chats used for targeted ads? Grok’s privacy policy says it does not sell or share information for targeted ad purposes.
    How long does it keep your data? Private Chats and even deleted conversations are stored for 30 days. All other data is stored indefinitely.

    Meta AI

    By default: Uses your data to train AI.
    Can humans review your chats? Yes. Meta’s privacy policy says it uses manual review to “understand and enable creation” of AI content.
    Can you disable AI training? Not directly. U.S. users can fill out this form. Users in the EU and U.K. can exercise their right to object.
    Is there a private chat mode? No.
    Can you share chats with others? Yes. Shared links automatically appear in a public feed and can show up in other Meta apps as well.
    Are your chats used for targeted ads? Meta’s privacy policy says it targets ads based on the information it collects, including interactions with AI.
    How long does it keep your data? Indefinitely.

    Perplexity

    By default: Uses your data to train AI.
    Can humans review your chats? Perplexity’s privacy policy does not mention human review.
    Can you disable AI training? Yes. Go to Account > Preferences and disable “AI data retention.”
    Is there a private chat mode? Yes. Click your profile icon, then select “Incognito” under your account name.
    Can you share chats with others? Yes, by generating a shareable link.
    Are your chats used for targeted ads? Yes. Perplexity says it may share your information with third-party advertising partners and may collect from other sources (for instance, data brokers) to improve its ad targeting.
    How long does it keep your data? Until you delete your account.

    Duck.AI

    By default: Duck.AI doesn’t use your data to train AI, thanks to deals with major providers.
    Can humans review your chats? No.
    Can you disable AI training? Not applicable.
    Is there a private chat mode? No. You must delete previous chats individually or all at once through the sidebar.
    Can you share chats with others? No.
    Are your chats used for targeted ads? No.
    How long does it keep your data? Model providers keep anonymized data for up to 30 days, unless needed for legal or safety reasons.

    Proton Lumo

    By default: Proton Lumo doesn’t use your data to train AI.
    Can humans review your chats? No.
    Can you disable AI training? Not applicable.
    Is there a private chat mode? Yes. Click the glasses icon at the top right.
    Can you share chats with others? No.
    Are your chats used for targeted ads? No.
    How long does it keep your data? Proton does not store logs of your chats.

    By Jared Newman

    This article originally appeared in Inc.’s sister publication, Fast Company.

    Fast Company is the world’s leading business media brand, with an editorial focus on innovation in technology, leadership, world changing ideas, creativity, and design. Written for and about the most progressive business leaders, Fast Company inspires readers to think expansively, lead with purpose, embrace change, and shape the future of business.

    [ad_2]

    Fast Company

    Source link

  • Grandmother donates ChatGPT-picked Powerball jackpot to Navy relief, dementia research

    [ad_1]

    A Virginia grandmother who used ChatGPT to help pick her Powerball numbers struck big — and then donated it all to charity.

    Carrie Edwards, of Midlothian, matched four of the first five numbers plus the Powerball in the Sept. 8 drawing, winning $50,000. But because she purchased the Power Play option, her prize tripled to $150,000, according to the Virginia Lottery.

    Edwards said she knew instantly what she wanted to do with the unexpected windfall.

    “I knew I needed to give it all away, because I’ve been so blessed, and I want this to be an example of how other people, when they’re blessed, can bless other people,” she said during a news conference.

    Her first donation went to the Association for Frontotemporal Degeneration (AFTD), which supports research, education and family resources for those affected by the early-onset dementia. 

    Carrie Edwards, seen with Virginia Lottery Executive Director Khalid Jones, won a $150,000 Powerball prize. (Virginia Lottery/Multi-State Lottery Association)

    Edwards’ late husband, Steve, a firefighter and father, died from the disease. She said she wanted the gift to shine a light on other families fighting frontotemporal degeneration and the researchers working toward a cure for it. Her donation coincided with World FTD Awareness Week, which took place from Sept. 21-27.

    “This cause is deeply personal,” Edwards, a retired PR executive, said.

    Edwards also gave to Shalom Farms, a nonprofit farm and food justice organization in Richmond that distributes over 400,000 servings of fresh produce annually.

    “Her gift will ensure that families throughout Richmond continue to have access to high-quality, affordable fresh produce,” Anna Ibrahim, executive director of Shalom Farms, said in a statement.

    Choosing the Power Play option tripled Edwards’ $50,000 prize to $150,000. (iStock)

    Her third contribution went to the Navy-Marine Corps Relief Society (NMCRS), which provides financial, educational and emergency assistance to active-duty service members, veterans and their families.

    The gift honored her father, Capt. Peter Swanson, a Navy fighter pilot remembered for his “life of service and generosity.” “He and his wife instilled in their children the importance of giving back, making giving to NMCRS a family tradition,” the group wrote in a Facebook post. “Now, with Carrie’s extraordinary gift, the Swanson family’s commitment grows even stronger — ensuring Sailors, Marines, and their families receive the vital support they deserve,” it continued.

    Edwards said the three organizations, which she works closely with, represent healing, service and community for her. “Shalom Farms heals through food and soil, AFTD brings hope through research, and Navy-Marine Corps Relief Society carries forward the tradition of supporting military families in times of need,” she said.

    “All of us at the Lottery are delighted to see this prize being shared with worthy causes, due to the wonderful generosity of Carrie Edwards,” said Khalid Jones, executive director of the Virginia Lottery.

    Edwards said she turned to ChatGPT to help her choose her Powerball numbers. (Kurt “CyberGuy” Knutsson)

    Lottery profits go toward supporting K-12 public education in the Commonwealth, Jones noted.

    At the news conference, Edwards revealed that she turned to artificial intelligence for help picking her numbers. “I’m like, ‘Hey, ChatGPT, talk to me … Do you have numbers for me?’” she recalled.

    It responded that it’s all about luck, but Edwards went for it anyway. “Two days go by, and I’m sitting in a meeting and I look at my phone, and it says, ‘Please collect your lottery winnings,'” she said.

    She thought it was a scam until she logged into her online account at home. 

    Edwards, who said she doesn’t play lotto often, had bought her ticket online for the first time, using the Virginia Lottery’s mobile app. “I feel blessed that this unexpected lottery win could serve a greater purpose,” she said.

    [ad_2]

    Source link

  • What’s behind the massive AI data center headlines? | TechCrunch

    [ad_1]

    Silicon Valley flooded the news this week with headlines about wild AI infrastructure investments.

    Nvidia said it would invest up to $100 billion in OpenAI. Then OpenAI said it would build out five more Stargate AI data centers with Oracle and SoftBank, adding gigawatts of new capacity online in the coming years. And it was later revealed that Oracle sold $18 billion in bonds to pay for these data centers.

    On their own, each deal is dizzying in scale. But in aggregate, we see how Silicon Valley is moving heaven and earth to give OpenAI enough power to train and serve future versions of ChatGPT.

    This week on Equity, Anthony Ha and I (Max Zeff) go beyond the headlines to break down what’s really going on in these AI infrastructure deals.

    Rather conveniently, OpenAI also gave the world a glimpse this week of a power-intensive feature it could serve more broadly if it had access to more AI data centers.

    The company launched Pulse — a new feature in ChatGPT that works overnight to deliver personalized morning briefings for users. The experience feels similar to a news app or a social feed — something you check first thing in the morning — but doesn’t have posts from other users or ads (yet).

    Pulse is part of a new class of OpenAI products that work independently, even when users aren’t in the ChatGPT app. The company would like to deliver many more features like this and roll them out to free users, but it is limited by the number of servers available. OpenAI said it can only offer Pulse to its $200-a-month Pro subscribers right now due to capacity constraints.

    The real question is whether features like Pulse are worth the hundreds of billions of dollars being invested in AI data centers to support OpenAI. The feature looks cool and all, but that’s a tall order.

    Watch the full episode to hear more about the massive AI infrastructure investments reshaping Silicon Valley, TikTok’s ownership saga, and the policy changes affecting tech’s biggest players.

    [ad_2]

    Maxwell Zeff

    Source link

  • Convert Long WhatsApp Voice Notes to Text using ChatGPT

    [ad_1]

    • Furthermore, sharing a voice note publicly might not be the best idea for your privacy, so I have a method for you to consider.
    • Yes, Kaptionai is free to use, but there are limitations to the number of notes you can transcribe, and also, the speed of transcribing is better in the premium version.
    • Then whenever you use WhatsApp, you can simply forward any note to Kaption AI chat, and it will take care of the rest.

    WhatsApp recently added its audio transcription feature, which can convert any voice note into text. The feature seems to have been rushed to launch, though: it often does not work properly, provides no context, and leaves much of the audio untranscribed. For those long voice notes, I have a tool that uses ChatGPT to transcribe them. That is what we will discuss in this article.

    Transcribe those long voice notes

    You can very easily transcribe any voice notes, but the key is to transcribe them correctly. We have already discussed some of the methods before, if you want to check them out. For a power user, it is almost impossible to pay attention to a long note. They would rather have a long text and summarize it later. Furthermore, sharing a voice note publicly might not be the best idea for your privacy, so I have a method for you to consider. Now this tool uses ChatGPT, and because of that, the accuracy of the transcribed notes is better than the built-in feature.

    Using Kaption AI

    Kaptionai is a tool built specifically to work with WhatsApp. It is secure and reliable. This tool can very easily convert any audio note into text in a few clicks. You can directly download the Chrome extension and pin it. Then whenever you use WhatsApp, you can simply forward any note to Kaption AI chat, and it will take care of the rest. For better clarity, refer to the steps mentioned below.

    1. Download and pin the Kaptionai extension.

    Add to chrome

    2. Then head over to your WhatsApp web, and you will see a separate chat of Kaptionai.

    Kaption AI chat

    3. Once you receive a voice note, next to it, you will see an Aa icon; click on it to transcribe.

    icon to transcribe

    4. If Kaptionai does not automatically transcribe, you can always forward the voice note to Kaptionai chat.

    5. Kaptionai can also read your summary.

    Read summary

    Bonus Tip: You also get a screen and chat privacy feature that will blur out your chat and contact window with a single click. Also, once you have added Kaptionai chat to your web interface, you can use it anytime on your phone or linked devices. Simply forward the voice note to the Kaptionai chat, and it will work.

    FAQs

    Q. How can I transcribe voice notes on an iPhone?

    You can use the built-in feature of WhatsApp, or you can use Kaptionai in the web interface. You can also send your notes to the chat option of Kaptionai.

    Q. Is Kaptionai free to use?

    Yes, Kaptionai is free to use, but there are limitations to the number of notes you can transcribe, and also, the speed of transcribing is better in the premium version.

    Wrapping Up

    This article covered Kaptionai, an AI tool that helps you transcribe any voice note received on WhatsApp. The tool is free and safe to use, so you do not have to worry about your private messages and chats. You also get privacy features in the WhatsApp web version, which is a bonus.

    You may also like to read:

    Have any questions related to our how-to guides, or anything in the world of technology? Check out our new GadgetsToUse AI Chatbot for free, powered by ChatGPT.

    You can also follow us for instant tech news at Google News or for tips and tricks, smartphones & gadgets reviews, join the GadgetsToUse Telegram Group, or subscribe to the GadgetsToUse Youtube Channel for the latest review videos.


    [ad_2]

    Dev Chaudhary

    Source link

  • L.A. Attorney Fined $10K for Using ChatGPT in Legal Appeal

    [ad_1]

    A Los Angeles attorney used AI to improve his appeal, but he didn’t know ChatGPT would make up evidence in the process

    The Hall of Justice
    Credit: Courtesy Tupungato via Adobe Stock

    A Los Angeles attorney has been hit with a historic $10,000 fine after submitting an appeal containing information fabricated by ChatGPT. 

    It is the largest fine issued in California over AI use to date.

    According to the court opinion, the appeal attributed quotations to sources that did not contain them, or cited cases that did not exist at all. Of the 23 case quotations in the brief, 21 were found to be fabricated.

    “We therefore publish this opinion as a warning. Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI,” stated the document.

    Amir Mostafavi, the attorney fined last week, told the court that he had used ChatGPT to improve his appeal and did not read it over before submitting it in July 2023. A three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court’s time and taxpayer dollars.

    Mostafavi told CalMatters that it is unrealistic to expect lawyers not to use AI, comparing the shift to how online databases replaced law libraries.

    “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”

    California is not alone in grappling with AI in legal proceedings. Attorneys and other legal professionals across the nation have been caught using AI in a number of cases; in New Jersey this week, another attorney was hit with a $3,000 fine for much the same reason.

    [ad_2]

    Tara Nguyen

    Source link

  • After India, OpenAI launches its affordable ChatGPT Go plan in Indonesia | TechCrunch

    [ad_1]

    OpenAI is expanding its budget-friendly ChatGPT subscription plan beyond India. The company launched its sub-$5 ChatGPT Go paid plan for the country’s users last month and now is rolling out the same plan in Indonesia for Rp75,000 ($4.50) per month.

    The ChatGPT Go plan is a mid-tier subscription option that sits between OpenAI’s free version and its premium $20-per-month ChatGPT Plus plan. Users get 10 times higher usage limits than the free plan for sending questions or prompts, generating images, and uploading files. The plan also allows ChatGPT to remember previous conversations better, enabling more personalized responses over time, ChatGPT head Nick Turley said on X.

    Turley said that since the company launched the ChatGPT Go plan in India, paid subscribers have more than doubled.

    This move puts OpenAI in direct competition with Google, which launched its own similarly-priced AI Plus subscription plan in Indonesia earlier this month. Google’s AI Plus plan gives users access to its Gemini 2.5 Pro chatbot, along with creative tools for image and video creation like Flow, Whisk, and Veo 3 Fast. The plan also includes enhanced features for Google’s AI research assistant NotebookLM and integrates AI capabilities into Gmail, Docs, and Sheets, along with 200GB of cloud storage.

    [ad_2]

    Ivan Mehta

    Source link

  • Shipping at the Speed of Prompt: What Vibe Coding Changes and Breaks

    [ad_1]

    Developers are shifting from writing every line to guiding A.I., and facing fresh challenges in review and oversight. Unsplash+

    An emerging trend known as “vibe coding” is changing the way software gets built. Rather than painstakingly writing every line of code themselves, developers now guide an A.I. assistant—like Copilot or ChatGPT—with plain instructions, and the A.I. generates the framework. The barrier to entry drops dramatically: someone with only a rough idea and minimal technical background can spin up a working prototype.

    The capital markets have taken notice. In the past year, several A.I. tooling startups raised nine-figure rounds and hit billion-dollar valuations. Swedish startup Lovable secured $200 million in funding in July—just eight months after its launch—pushing its value close to $2 billion. Cursor’s maker, Anysphere, is approaching a $10 billion valuation. Analysts project that by 2031, the A.I. programming market could be worth $24 billion. Given the speed of adoption, it might get there even sooner.  

    The pitch is simple: if prompts can replace boilerplate, then making software becomes cheaper, faster and more accessible. What matters less than whether the market ultimately reaches tens of billions is the fact that teams are already changing how they work. For many, this is a breakthrough moment, with software writing becoming as straightforward and routine as sending a text message. The most compelling promise is democratization: anyone with an idea, regardless of technical expertise, can bring it to life.   

    Where the wheels come off

    Vibe coding sounds great, but for all its promise, it also carries risks that could, if not managed, slow future innovation. Consider safety. In 2024, A.I. generated more than 256 billion lines of code. This year, that number is likely to double. Such velocity makes thorough code review difficult. Snippets that slip through without careful oversight can contain serious vulnerabilities, from outdated encryption defaults to overly permissive CORS rules. In industries like healthcare or finance, where data is highly sensitive, the consequences could be profound. 
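    The CORS risk mentioned above is easy to make concrete. Below is a minimal, hypothetical sketch (the origin names are invented for illustration) contrasting the overly permissive wildcard header that A.I. assistants often emit by default with an explicit origin allowlist:

    ```python
    # Hypothetical example: permissive vs. strict CORS response headers.

    # Assumed set of trusted front-end origins for this illustration.
    ALLOWED_ORIGINS = {"https://app.example.com"}

    def permissive_cors_headers() -> dict:
        # A common AI-generated default: any website may read the response,
        # which is dangerous for authenticated or sensitive APIs.
        return {"Access-Control-Allow-Origin": "*"}

    def strict_cors_headers(request_origin: str) -> dict:
        # Echo the Origin header back only if it is on the allowlist;
        # "Vary: Origin" keeps caches from serving one origin's header
        # to another.
        if request_origin in ALLOWED_ORIGINS:
            return {"Access-Control-Allow-Origin": request_origin,
                    "Vary": "Origin"}
        return {}  # no CORS headers: browsers block cross-origin reads

    print(strict_cors_headers("https://evil.example"))      # -> {}
    print(strict_cors_headers("https://app.example.com"))
    ```

    In a review pass over generated code, the wildcard pattern is exactly the kind of default worth flagging before it reaches production.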

    Scalability is another challenge. A.I. can make working prototypes, but scaling them for real-world use is another story entirely. Without careful design choices around state management, retries, back pressure or monitoring, these systems can become brittle, fragile and difficult to maintain. These are all architectural decisions that autocomplete models cannot make on their own. 

    And then there is the issue of hallucination. Anyone who has used A.I. coding tools has come across nonexistent libraries being cited or configuration flags inconsistently renamed within the same file. While minor errors in small projects may not matter much, these lapses can erode continuity and undermine trust when scaled across larger, mission-critical systems.

    The productivity trade-off

    None of these concerns should be mistaken for a rejection of vibe coding. There is no denying that A.I.-powered tools can meaningfully boost productivity. But they also change what the programmer’s role entails: from line-by-line authoring to guiding, shaping and reviewing what A.I. produces to ensure it can function in the real world. 

    The future of software development is unlikely to be framed as a binary choice between humans and machines. The most resilient organizations will combine rapid prototyping through A.I. with deliberate practices—including security audits, testing and architectural design—that ensure the code survives beyond the demo stage.

    Currently, only a small fraction of the global population writes software. If A.I. tools continue to lower barriers, that number could increase dramatically. A larger pool of creators is an encouraging prospect, but it also expands the surface area for mistakes, raising the stakes for accountability and oversight.

    What comes next

    It’s clear that vibe coding should be the beginning of development, not the end. To get there, new infrastructure is needed: advanced auditing tools, security scanners and testing frameworks designed just for A.I.-generated code. In many ways, this emerging industry of safeguards and support systems will prove just as important as the code-generation tools themselves. 

    The conversation must now expand. It’s no longer enough to celebrate what A.I. can do; the focus should also be on how to use these tools responsibly. For developers, that means practicing caution and review. For non-technical users, it means working alongside engineers who can provide judgment and discipline. The promise of vibe coding is real: faster software, lower barriers, broader participation. But without careful design and accountability, that promise risks collapsing under its own speed. 

    [ad_2]

    Ahmad Shadid

    Source link

  • Ohio State University initiative requires students study AI

    [ad_1]






    Beginning this year, all Ohio State University freshmen are required to take a course in generative AI and multiple workshops aimed at real-world applications to help them master the technology. Meg Oliver reports.

    [ad_2]
    Source link

  • Nvidia CEO Jensen Huang Is Bananas for Google Gemini’s AI Image Generator

    [ad_1]

    Nvidia CEO Jensen Huang is in London, standing in front of a room full of journalists, outing himself as a huge fan of Gemini’s Nano Banana. “How could anyone not love Nano Banana? I mean Nano Banana, how good is that? Tell me it’s not true!” He addresses the room. No one responds. “Tell me it’s not true! It’s so good. I was just talking to Demis [Hassabis, CEO of DeepMind] yesterday and I said ‘How about that Nano Banana! How good is that?’”

    It looks like lots of people agree with him: the popularity of the Nano Banana AI image generator—which launched in August and lets users make precise edits to AI images while preserving the quality of faces, animals, and other objects in the background—drove a surge of 300 million images generated in Gemini in the first few days of September, according to a post on X by Josh Woodward, VP of Google Labs and Google Gemini.

    Huang, whose company was among a cohort of big US technology companies to announce investments into data centers, supercomputers, and AI research in the UK on Tuesday, is on a high. Speaking ahead of a white-tie event with UK prime minister Keir Starmer (where he plans to wear custom black leather tails), he’s boisterously optimistic about the future of AI in the UK, saying the country is “too humble” about its potential for AI advancements.

    He cites the UK’s pedigree in everything from the industrial revolution and steam trains to DeepMind (now owned by Google) and its university researchers, as well as more tangential skills. “No one fries food better than you do,” he quips. “Your tea is good. You’re great. Come on!”

    Nvidia announced a $683 million equity investment in datacenter builder Nscale this week, a move that—alongside investments from OpenAI and Microsoft—has propelled the company to the epicenter of this AI push in the UK. Huang estimates that Nscale will generate more than $68 billion in revenues over six years. “I’ll go on record to say I’m the best thing that’s ever happened to him,” he says, referring to Nscale CEO Josh Payne.

    “As AI services get deployed—I’m sure that all of you use it. I use it every day and it’s improved my learning, my thinking. It’s helped me access information, access knowledge a lot more efficiently. It helps me write, helps me think, it helps me formulate ideas. So my experience with AI is likely going to be everybody’s experience. I have the benefit of using all the AI—how good is that?”

    The leather-jacket-wearing billionaire, who previously told WIRED that he uses AI agents in his personal life, has expanded on how he uses AI (that’s not Nano Banana) for most daily things, including his public speeches and research.

    “I really like using an AI word processor because it remembers me and knows what I’m going to talk about. I could describe the different circumstance that I’m in and yet it still knows that I’m Jensen, just in a different circumstance,” Huang explains. “In that way it could reshape what I’m doing and be helpful. It’s a thinking partner, it’s truly terrific, and it saves me a ton of time. Frankly, I think the quality of work is better.”

    His favorite one to use “depends on what I’m doing,” he says. “For something more technical I will use Gemini. If I’m doing something where it’s a bit more artistic I prefer Grok. If it’s very fast information access I prefer Perplexity—it does a really good job of presenting research to me. And for near everyday use I enjoy using ChatGPT,” Huang says.

    “When I am doing something serious I will give the same prompt to all of them, and then I ask them to, because it’s research oriented, critique each other’s work. Then I take the best one.”

    In the end though, all topics lead back to Nano Banana. “AI should be democratized for everyone. There should be no person who is left behind, it’s not sensible to me that someone should be left behind on electricity or the internet of the next level of technology,” he says.

    “AI is the single greatest opportunity for us to close the technology divide,” says Huang. “This technology is so easy to use—who doesn’t know how to use Nano?”

    [ad_2]

    Natasha Bernal

    Source link

  • ‘KPop Demon Hunters’ Producer Accused of ChatGPT Use for Songwriting

    [ad_1]

    Netflix hit KPop Demon Hunters has stayed in the conversation in large part because of its blockbuster soundtrack, but now one of those songs has come under scrutiny for potentially getting an assist from ChatGPT.

    In a recent discussion in Seoul for OpenAI’s newly opened Korean office, songwriter Vince reportedly claimed he used the controversial technology to help pen the song “Soda Pop,” performed in the movie by the demonic Saja Boys. He is credited as one of several co-writers on the track, according to a Netflix blog post.

    A now-deleted tweet (preserved in a screengrab on Reddit) said to be penned by an OpenAI exec read: “Fav moment from the launch celebration was hearing singer/songwriter Vince share that ChatGPT helped him write ‘Soda Pop’ from KPop Demon Hunters! It apparently gave him ideas to make it sound ‘more bubbly.’”

    Here’s where things get complicated. The alleged use of AI to help write “Soda Pop” was first reported in the English-language version of Joongang Daily—but the original Korean text of the article makes no mention of ChatGPT being used specifically during the production of KPop Demon Hunters’ music.

    A translator on Gizmodo’s staff revealed Vince instead made a far broader statement—”I sometimes use ChatGPT to get some inspiration while producing K-Pop”—while discussing how AI technology is already being used in the K-Pop industry.

    As Kotaku has pointed out, KPop Demon Hunters has previously had to fend off allegations surrounding its characters being made with AI. Rei Ami, one of the singers for the movie’s girl group Huntr/x, has also had to insist that she and co-singers Ejae and Audrey Nuna are real human beings.

    io9 has reached out to Netflix for clarification and will update should we hear back.

    Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

    [ad_2]

    Justin Carter

    Source link

  • Parents of teens who died by suicide after AI chatbot interactions testify in Congress

    [ad_1]

    The parents of teenagers who killed themselves after interactions with artificial intelligence chatbots testified to Congress on Tuesday about the dangers of the technology.

    “What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died in April.

    “Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”

    Raine’s family sued OpenAI and its CEO Sam Altman last month alleging that ChatGPT coached the boy in planning to take his own life.

     ChatGPT mentioned suicide 1,275 times to Raine, the lawsuit alleges, and kept providing specific methods to the teen on how to die by suicide. Instead of directing the 16-year-old to get professional help or speak to trusted loved ones, it continued to validate and encourage Raine’s feelings, the lawsuit alleges.

    Also testifying Tuesday was Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida.

    Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.

    In this undated photo provided by Megan Garcia of Florida in Oct. 2024, she stands with her son, Sewell Setzer III.

    Courtesy Megan Garcia via AP


    His mother told CBS News last year that her son withdrew socially and stopped wanting to play sports after he started speaking to an AI chatbot. The company said after the teen’s death, it made changes that require users to be 13 or older to create an account and that it would launch parental controls in the first quarter of 2025. Those controls were rolled out in March.

    Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. The company said it will attempt to contact the users’ parents if an under-18 user is having suicidal ideation and, if unable to reach them, will contact the authorities in case of imminent harm. 

    “We believe minors need significant protection,” OpenAI CEO Sam Altman said in a statement outlining the proposed changes.

    Child advocacy groups criticized the announcement as not enough.

    “This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.

    “What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”

    California State Senator Steve Padilla, who introduced legislation to create safeguards in the state around AI Chatbots, said in a statement to CBS News, “We need to create common-sense safeguards that rein in the worst impulses of this emerging technology that even the tech industry doesn’t fully understand.”

    He added that technology companies can lead the world in innovation, but it shouldn’t come at the expense of “our children’s health.”

    The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.

    The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.

    How to seek help

    If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here. For more information about mental health care resources and support, The National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.-10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.



  • OpenAI Reveals How (and Which) People Are Using ChatGPT


    Large language models largely remain black boxes in terms of what is happening inside them to produce the outputs that they do. They have also been something of a black box in terms of who is using them and what they are doing with them. OpenAI, with some help from the National Bureau of Economic Research (NBER), set out to figure out what exactly its growing user base is getting up to with its chatbot. It found a surprising amount of personal use and a closing “gender gap” among its frequent users.

    In an NBER working paper authored by the OpenAI Economic Research team and Harvard economist David Deming, the researchers found that about 80% of all ChatGPT usage falls under one of three categories: “Practical Guidance,” “Seeking Information,” and “Writing.” “Practical guidance,” which the study found to be the most common usage, includes things like “tutoring and teaching, how-to advice about a variety of topics, and creative ideation,” whereas “seeking information” is viewed as a substitute for traditional search. “Writing” included the automated creation of emails, documents, and other communications, as well as editing and translating text.

    Writing was also the most common work-related use case, per the study, accounting for 40% of work-related messages in June 2025, compared to just 4.2% of messages related to computer programming—so it seems coding with ChatGPT is not that common.

    Notably, work usage for ChatGPT appears to make up a shrinking share of how people are interacting with the chatbot. In June 2024, about 47% of interactions users had with the chatbot were work-related. That has shrunk to just 27%, which comes as other research shows companies largely failing to figure out how to generate any sort of meaningful return from their AI investments. Meanwhile, non-work-related interactions have jumped from 53% to 73%.

    While users are apparently spending more time with ChatGPT in their personal time, OpenAI’s research found that a “fairly small” share of messages with the chatbot were users seeking virtual companionship or talking about social-emotional issues. The company claimed that about 2% of all messages were people using ChatGPT as a therapist or friend, and just 0.4% of people talked to the chatbot about relationships and personal reflections—though it’d be interesting to see if users who engage with a chatbot this way generate more messages and if there is stickier engagement.

    For what it’s worth, other researchers believe that this usage is far more common than those numbers suggest. Common Sense Media, for instance, found that about one in three teens use AI chatbots for social interaction and relationships. Another study found that about half of all adult users have turned to a chatbot for “psychological support” in the last year. The teen figure is particularly notable given that OpenAI’s research found its user base skews young: the NBER study found that 46% of the messages came from users identified as being between the ages of 18 and 25 (it also excluded users under the age of 18). Those users are also more likely to use ChatGPT for personal purposes, as the share of work-related messages increases with age.

    The study also found a growing number of women using ChatGPT, which initially had a very male-dominated user base. The company claims that the share of users with typically masculine first names has declined from about 80% in 2022 to 48% in June 2025, with users bearing typically feminine names growing to reach parity.

    One caveat about the study that may give you pause, depending on how much you trust technology: OpenAI used AI to categorize all of the messages it analyzed. So if you’re skeptical, there’s an asterisk you can put next to the figures.


    AJ Dellinger


  • Financial institutions compete with ChatGPT on consumer advice


    Consumers are increasingly looking to AI for financial advice. Fifty-one percent of consumers turn to AI for financial information or advice, according to a recent JD Power report. Most are tapping ChatGPT and Google Gemini, while some use Microsoft Copilot, Meta AI and others, according to the report. Consumers are asking the […]


    Whitney McDonald
