ReportWire

Tag: AI

  • JEEVE Unveils “Spoken for – the Movie”: A 12-Part AI Visual Album

    After decades shaping hits for artists including Carlos Santana, Tupac, Britney Spears, Kat Graham, Todrick Hall, Nicole Scherzinger, Beto Cuevas, La Ley, Luis Fonsi, David Bisbal (Spain), M. Pokora (France), Pixie Lott, and James Arthur (UK), Grammy-winning producer Jeeve steps into the spotlight with a kaleidoscopic visual journey through trauma, identity, and liberation.

    Jean-Yves “Jeeve” Ducornet, Founder of Crystal Ship Music, has been the creative force behind some of music’s most iconic voices for more than 35 years. A Grammy-winning producer, he has shaped records for Carlos Santana, Tupac, Britney Spears, Kat Graham, Todrick Hall, Nicole Scherzinger, Beto Cuevas, La Ley, Luis Fonsi, David Bisbal, Pixie Lott, James Arthur, and many others. His fingerprints are on platinum records and global hits – but with his new project, Spoken For – The Movie, Jeeve finally steps into the spotlight as the artist himself.

    A Cinematic Album Experience

    Two years in the making, Spoken For – The Movie is more than an album. It’s a 12-part visual odyssey – each track paired with its own AI-driven short film. Blending music, cinema, and generative art, the project traces a journey through inherited trauma, creative struggle, fleeting love, and ultimate rebirth.

    Videos from Spoken For – The Movie have already earned honors at film festivals such as Chicago Filmmaker Awards, Video Musika, Mannheim Arts and Film Festival, San Diego Movie Awards, and many more.

    “Our insecurities can be more cinematic than a blockbuster,” Jeeve says. “I wanted to create something that feels part therapy, part prophecy – and completely my own.”

    Highlights from Spoken For – The Movie

    “Why Do I” – A raw opening track confronting generational trauma and emotional paralysis, setting the tone for the journey ahead.

    “That Thing That Makes You Win” – A biting satire of the music industry’s con artists, staged as an 1800s carnival where charisma outshines talent.

    “Good As Perfect” – A wedding song stripped of fantasy, embracing flaws and friction as the true foundation of lasting love.

    The title track, “Spoken For,” delivers the project’s most haunting metaphor: a child silenced by a controlling parent, visualized through an android whose cracking voice box reveals the vulnerable human underneath.

    An Artist Reclaimed

    Half French, half American, Jeeve has long lived in dualities – acclaimed music producer yet anonymous artist. With Spoken For – The Movie, he breaks that pattern. Writing, directing, producing, mixing, and editing virtually every element himself – using tools like Photoshop, Kling AI, Runway, Google Veo 3, Midjourney, and Topaz AI – Jeeve emerges as a true auteur.

    “I’ve made music for others my whole career,” he reflects. “This is the first time I told myself what to do, and I finally listened.”

    Release Date: January 9, 2026

    Platforms: Streaming on all major platforms; visual films premiering on Apple TV, Amazon Prime Video, Google Play Movies, YouTube TV, and Fandango at Home.

    Contact Information

    Jean-Yves Ducornet
    bigjeeve@gmail.com

    Source: Crystal Ship Music

    Source link

  • Bank of America’s focus is ‘getting the data right,’ CEO says

    Bank of America is being judicious in deployment of AI within its operations but already seeing the role the technology is playing in maintaining its expenses.  “We are in a regulated industry,” Chief Executive Brian Moynihan said during the bank’s third-quarter earnings call today. This means the bank has to be certain of its AI […]

    Vaidik Trivedi

    Source link

  • Citizens Bank deploys agentic AI in transformation program

    Citizens Bank is deploying AI and agentic AI as part of its Reimagine the Bank program.  “We have a team of executives from across the bank working on cultivating technology and AI-enabled ideas that will empower our colleagues to run the bank better,” Chris Emerson, interim chief financial officer of Citizens Financial Group, said during today’s […]

    Whitney McDonald

    Source link

  • The AI Industry’s Scaling Obsession Is Headed for a Cliff

    A new study from MIT suggests the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller models. By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.

    “In the next five to 10 years, things are very likely to start narrowing,” says Neil Thompson, a computer scientist and professor at MIT involved in the study.

    Leaps in efficiency, like those seen with DeepSeek’s remarkably low-cost model in January, have already served as a reality check for the AI industry, which is accustomed to burning massive amounts of compute.

    As things stand, a frontier model from a company like OpenAI is currently much better than a model trained with a fraction of the compute from an academic lab. While the MIT team’s prediction might not hold if, for example, new training methods like reinforcement learning produce surprising new results, they suggest that big AI firms will have less of an edge in the future.

    Hans Gundlach, a research scientist at MIT who led the analysis, became interested in the issue due to the unwieldy nature of running cutting edge models. Together with Thompson and Jayson Lynch, another research scientist at MIT, he mapped out the future performance of frontier models compared to those built with more modest computational means. Gundlach says the predicted trend is especially pronounced for the reasoning models that are now in vogue, which rely more on extra computation during inference.

    Thompson says the results show the value of honing an algorithm as well as scaling up compute. “If you are spending a lot of money training these models, then you should absolutely be spending some of it trying to develop more efficient algorithms, because that can matter hugely,” he adds.
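    To see the intuition behind that argument, here is a toy numerical sketch (my own illustration, not the MIT team’s model): assume a Chinchilla-style loss curve with an irreducible floor, L(C) = E + a·C^(−α), and a fixed yearly algorithmic-efficiency multiplier that benefits everyone. Because returns diminish as both large and small compute budgets approach the floor, the absolute gap between a frontier-scale model and a modest one shrinks over time. All constants below are invented for illustration.

    # Toy sketch only: the constants (E, a, alpha, budgets, 2x/year efficiency
    # gain) are made up for illustration, not figures from the MIT study.

    def loss(effective_compute, E=1.7, a=10.0, alpha=0.3):
        """Chinchilla-style loss curve with an irreducible floor E."""
        return E + a * effective_compute ** -alpha

    frontier_compute = 1e6   # arbitrary units, ~1000x the modest budget
    modest_compute   = 1e3
    efficiency_gain_per_year = 2.0  # assumed algorithmic-efficiency doubling

    for year in range(0, 11, 2):
        eff = efficiency_gain_per_year ** year      # efficiency helps everyone
        gap = loss(modest_compute * eff) - loss(frontier_compute * eff)
        print(f"year {year:2d}: loss gap = {gap:.3f}")

    Running it, the gap between the two loss values falls steadily toward zero, which is the narrowing the researchers describe; the actual study maps empirical scaling laws and measured efficiency trends rather than these invented constants.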

    The study is particularly interesting given today’s AI infrastructure boom (or should we say “bubble”?)—which shows little sign of slowing down.

    OpenAI and other US tech firms have signed hundred-billion-dollar deals to build AI infrastructure in the United States. “The world needs much more compute,” OpenAI’s president, Greg Brockman, proclaimed this week as he announced a partnership between OpenAI and Broadcom for custom AI chips.

    A growing number of experts are questioning the soundness of these deals. Roughly 60 percent of the cost of building a data center goes toward GPUs, which tend to depreciate quickly. Partnerships between the major players also appear circular and opaque.

    Will Knight

    Source link

  • The Electric Bill Is Too Damn High, and Both Sides Agree: It’s the Local Data Center’s Fault

    The data center industry is exploding, and server farms are popping up all over the world. As you might’ve guessed, much of the industry’s boom is built around the blossoming generative AI industry, which requires massive amounts of electricity to do stuff like make a picture of Mickey Mouse committing the 9/11 terrorist attacks.

    What is also exploding as a result of the tech industry’s ever more exponential energy needs are electricity bills—for everyone. Consumers are noticing (and unhappy about it), and now, so are the politicians tasked with representing them.

    In some cases, the cost of electricity has increased so significantly that it has presented an easy political platform on which candidates can base their campaigns. Indeed, a recent article from the New York Times shows how the direness of recent utility bills is fueling a gubernatorial race in New Jersey:

    This generally obscure topic has become critical in New Jersey because electricity rates this summer climbed 22 percent from a year earlier — faster than all but one state: Maine. As the governor’s race has tightened and affordability has become a key issue, power costs have become a predominant theme in ads paid for in part by groups associated with both national parties.

    One Bloomberg analysis from September found that electricity “now costs as much as 267% more for a single month than it did five years ago in areas located near significant data center activity.” But it’s not just the people who are unfortunate enough to live close to a data center who are feeling the impact. In August, the New York Times reported that “the average electricity rate for residents has risen more than 30 percent since 2020.” Those numbers are expected to continue to explode over the next five years, with one study from Carnegie Mellon University and North Carolina State University estimating that people in the U.S. living near data centers will see another 25 percent increase.

    The rising costs associated with the data center industry seem to be an ever-more common theme in political races. Semafor quotes several politicians who have recently expressed interest in deterring the data center industry. “I think we should, personally, block all future data centers,” said Patrick Harders, a Republican running for an open county board seat in Virginia.

    “We need to ensure that data centers aren’t built where they don’t belong,” said Geary Higgins, another Republican, in a recent campaign ad. Semafor notes that Higgins’ competitor recently did their own ad in which they also dissed data centers, asking: “Do you want more of these in your backyard?”

    Conservatives have a long history of talking tough when it comes to the tech industry and then doing very little. Democrats, meanwhile, have spent years defending Silicon Valley, despite growing concern from their constituents. However, now that the likes of Elon Musk and Marc Andreessen have thrown their weight behind President Trump, the spell seems to be breaking somewhat.

    Whether politicians earnestly plan to do anything about the threat to your electricity bill is up for debate. What does seem clear is that the specter of the price hikes has offered an easy way for legislators to virtue signal to their constituents that they’re on their side.

    “Electricity is the new eggs,” the New York Times quotes David Springe, executive director of the National Association of State Utility Consumer Advocates, as saying. Damn if that isn’t a ringing indictment of the new Trump economy. Electricity is the New Eggs could be a winning campaign slogan for the Democrats, if they were smart.

    Lucas Ropek

    Source link

  • Newsom Vetoes Bill to Restrict AI Chatbots for Minors

    The governor said the proposed AI restrictions were too broad, even as parents and advocates urged stronger safeguards for minors online

    On Monday, California Governor Gavin Newsom vetoed a bill meant to restrict the usage of AI chatbots for anyone under 18. 

    The bill was proposed by Assemblymember Rebecca Bauer-Kahan (D) as the Leading Ethical AI Development for Kids Act (LEAD). It would have restricted any companion chatbot platform, including those from OpenAI and Meta, from being used by a minor if there were obvious potential for harm or sexual conversations.

    “While I strongly support the author’s goal of establishing necessary safeguards for the safe use of AI by minors, (the bill) imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors,” Newsom said.

    Newsom faced intense pressure on the LEAD Act, including a personal letter from parents who said their son took his own life after ChatGPT became his “suicide coach.” On the opposing side, the tech industry argued that the bill was too broad and would stifle innovation by taking away useful tools for children, such as AI tutoring systems and programs that could detect early signs of dyslexia.

    Common Sense Media, a non-profit organization that reviews and rates media for families and sponsored the LEAD Act, decried the veto. James Steyer, Common Sense Media’s founder and CEO, said in a statement, “It is genuinely sad that the big tech companies fought this legislation, which actually is in the best interest of their industry long-term.”

    Newsom signed a narrower measure, authored by Sen. Steve Padilla (D), that will require chatbots to establish protocols to “detect, remove, and respond to instances of suicidal ideation by users.”

    Chatbot operators will now have to implement protocols to ensure their systems do not deliver self-harm or suicide content to users, as well as put in place “reasonable measures” to prevent chatbots from encouraging minors to engage in sexually explicit conduct.

    Anastasia Van Batenburg

    Source link

  • Citi attributes 100K hours of savings to gen AI deployment

    Citigroup is starting to realize returns on its investments in generative AI and tech modernization.  The $1.7 trillion bank reported that its gen AI tool saved employees 100,000 hours during the third quarter. The bank conducted 1 million code reviews in Q3, up from 220,000 code reviews in Q1, according to its Q3 earnings report. […]

    Vaidik Trivedi

    Source link

  • Goldman launches AI-driven One Goldman Sachs 3.0

    Goldman Sachs is leaning into AI to drive efficiency as it launches a new AI operating model across the institution.  The One Goldman Sachs 3.0 model is “propelled by AI,” David Solomon, chief executive, said during the bank’s third-quarter earnings call today. “This is a new, more centralized operating model that we expect to drive […]

    Vaidik Trivedi

    Source link

  • This Report Says AI Stole 17,000 Jobs This Year. The DOGE Effect Is Much Worse 

    AI evangelists continue to insist that AI is improving workers’ efficiency and thus business productivity, freeing up staff from mundane duties to do more meaningful work. Fewer boosters are cheering the fact that it’s just as easy for companies that have gone all in on the new technology to cut labor costs by replacing workers. According to a new report, thousands of jobs have already disappeared from the job market this year as AI has assumed those duties instead, and fully 7,000 of the losses happened in September alone. All of this may feed into your thinking about rolling out AI at your own company.

    The data, from Chicago-based executive outplacement firm Challenger, Gray & Christmas, attributes 17,375 job losses to adoption of AI tech since the start of 2025. Most of these cuts were made public in the second half of the year, industry news site HRDive reports.

    The numbers are dramatic, especially since a similar report from Challenger in July said that among some 20,000 jobs lost to “automation” in the first half of the year, only 75 were directly connected to AI. Andy Challenger, senior vice president at the firm, told CFODive at the time that the suspicion was that many more jobs were actually lost to AI. “We do see companies using the term ‘technological update’ more often than we have over the past decade, so our suspicion is that some of the AI job cuts that are likely happening are falling into that category,” Challenger said then, also noting that some firms were being careful because they “don’t want press on it.” 

    In the new report, Challenger noted that it’s mainly tech firms that are “undergoing incredible disruption,” because of AI. Challenger also backed up many earlier reports by noting that the buzzy, controversial tech is “not only costing jobs, but also making it difficult to land positions, particularly for entry-level engineers.” 

    HRDive notes that it’s losses at Salesforce that may be linked to those massive AI-related job cuts in recent months, with Salesforce CEO Marc Benioff noting in August that customer service staff numbers were slashed by about 4,000 after AI agents took on some customer handling duties. The interesting wrinkle here is that Salesforce is one of the big tech names that is pivoting aggressively and openly to adopting AI tech, and is even selling it to its customers with the promise that agent-based AIs can save them money. Benioff in early 2025 also said “my message to CEOs right now is that we are the last generation to manage only humans.” In his vision for future company leadership, managers will be steering both AIs and humans through their day to day operations. 

    While 17,000 jobs lost to AI sounds like a lot, it’s dwarfed by other causes, the Challenger report shows. DOGE-related actions are the “leading reason for job cut announcements in 2025,” the report notes, with 293,753 planned layoffs connected to DOGE activities, including reductions to federal workforce numbers and the cutting of contractor deals. Nearly 21,000 more jobs have been lost as part of what Challenger’s report calls “DOGE Downstream Impact,” where funding cuts have hit nonprofits that depend on federal grants. Traditional market and general economic concerns drove another 208,227 cuts in 2025, the report also notes. This means DOGE and the typical workings of the economy are responsible for around 30 times as many job losses as AI.

    But it would be unreasonable to assume AI’s body count won’t rise, considering Big Tech’s push to get AI into the workplace, while developing increasingly capable AI tools that can handle human jobs. And while Challenger notes that tech-centric firms are bearing the brunt of AI-related job cuts right now, it would be sensible to guess that other industries will soon follow.

    What’s the takeaway for your company?

    Primarily that it may be a good idea to reassure your staff that if you’re rolling out AI tools to streamline operations, you’re not actually planning on downsizing your workforce. “AI won’t be stealing anyone’s job here” is a strong message that will build your team’s trust, assuming that this is actually the case.

    Another side effect may be a glut of workers in the job marketplace. Since many job seekers are using AI tools to boost their hunt for new employment, you may actually see many more applicants than before for open positions at your company, and your HR team may be quickly overburdened.

    Kit Eaton

    Source link

  • The Intersection Of AI And Cannabis Wellness

    The intersection of AI and cannabis wellness is creating personalized, data-driven wellness experiences

    Artificial intelligence isn’t just revolutionizing tech; it is starting to reshape daily life. Here is a look at the intersection of AI and cannabis wellness, and how it is reshaping how people grow, buy, and use cannabis for wellness. From personalized product recommendations to smarter cultivation systems, AI is helping cannabis evolve from a cultural trend to a precision-based wellness industry.

    For Gen Z and millennials, who already blend technology seamlessly into their lifestyles, this intersection feels natural. Cannabis is no longer just about relaxation—it’s about balance, mental clarity, and customized health. AI is making that level of personalization possible.

    One driving reason the AI-meets-cannabis wellness niche resonates now is that younger generations—especially Gen Z and younger millennials—are grappling with high levels of anxiety. In a 2023 Gallup survey, nearly 47% of Gen Zers (ages 12–26) said they “often” or “always” feel anxious, and more than 20% reported often or always feeling depressed. Among Gen Z young adults aged 18–24, a U.S. Census Bureau–backed survey found that 44% experienced persistent feelings of nervousness or being “on edge.” Compared to older generations, Gen Z is more than twice as likely to report frequent stress or anxiety symptoms, according to cross-generational global research.

    Younger generations are actively seeking tools and strategies to manage anxiety—including wellness approaches that combine science, personalization, and low stigma. That’s exactly where AI + cannabis for wellness enters the picture: offering a data-driven but holistic option that appeals to digital natives looking for more control over their mental health.

    On the production side, AI-powered sensors and predictive analytics are helping growers fine-tune everything from lighting and humidity to nutrient delivery. The result? Cleaner, more consistent plants with specific cannabinoid and terpene profiles tailored to wellness needs like sleep support, anxiety relief, and focus. Companies such as Bloom Automation and Cultivation Tech are using computer vision to analyze plant health in real time, reducing waste and maximizing yields.

    In dispensaries and online retail, AI-driven recommendation engines—similar to what Spotify or Netflix use—help consumers find products that match their body chemistry and goals. Imagine logging into an app, entering your stress level or sleep quality, and receiving a curated list of strains, edibles, or tinctures designed just for you. This approach appeals to younger consumers who crave transparency, data, and control over what they put in their bodies.

    AI is also improving safety and education. Machine learning tools can flag potentially unsafe combinations of products or alert users to dosage risks. Meanwhile, chatbots trained in medical cannabis knowledge are making trusted information more accessible, especially for first-time users exploring cannabis as an alternative to pharmaceuticals.

    As cannabis moves toward broader federal acceptance, AI could become its most powerful ally—driving innovation, compliance, and credibility. The technology gives the industry a scientific edge, helping shift the narrative from “stoner culture” to “smart wellness.”

    For a generation raised on both self-care and technology, the AI-cannabis partnership feels like the future: personalized, data-driven, and deeply human.

    Anthony Washington

    Source link

  • Why AI Companies Are Racing to Build a Virtual Human Cell

    A human cell is a Rube Goldberg machine like no other, full of biological chain reactions that make the difference between life and death. Understanding these delicate relationships and how they go wrong in disease is one of the central fascinations of biology. A single mistake in a gene can bend the protein it makes into the wrong shape. A misshapen protein can’t do its job. And for want of that protein, the organism–you–may start to fall apart.

    Cells are so complex, however, that getting a sense of how one protein’s failure spreads through the system is tough. Graham Johnson, a computational biologist and scientific illustrator at the Allen Institute for Cell Science, recalls fantasizing at a lunch table, more than 15 years ago, about a computer model of a cell so detailed, so complete, that scientists could watch such processes happening. At that time, “everyone just snickered,” he says. “It was just too unrealistic.”  

    But now some researchers are using AI to take new steps towards the goal of a “virtual cell.” Google’s DeepMind is working on such a project, and the Chan Zuckerberg Initiative (CZI) has made virtual cells a major focus in their Biohub research network, says Theo Karaletsos, senior director of AI at CZI. There is even a new prize, set up by the Arc Institute, for virtual-cell-style models. The goal of all these endeavors is to predict how both healthy and diseased cells work, in so much detail that it’s possible to speed up the development of drugs and accelerate scientific discoveries. Virtual cells might even streamline basic research, some think, moving biologists from the lab bench to the keyboard.  

    What is a virtual cell, anyway?

    The precise definition of a virtual cell varies depending on whom you talk to. Some scientists, like Johnson, hope that a virtual cell will include a visual representation that you can click through and explore. Others think of it primarily as a set of computer programs that can answer questions and make predictions about what’s likely to happen. But the concept is not a new idea. For decades, biologists have been building mathematical models of cellular processes. To make them, researchers draw on data from experiments with real cells, coming up with equations that describe what’s going on. 

    There’s now more data about the human cell than ever before, thanks in part to technology that allows scientists to spy on the activities of individual cells. But figuring out equations for every process and putting them all together is a monumental task. “The old way of doing it”—manually, that is—”had, I would say, only very limited success,” says Stephen Quake, a professor at Stanford University and the former head of science at CZI. Last year, he and other researchers published a paper laying out a vision for another approach, one that feeds data about cells directly to specialized AIs. “You build models that are learning directly from data, rather than trying to write down equations,” he says.

    Quake and his colleagues have had some interesting early results. They used data on cells from 12 different species to train an AI. The AI was then able to make accurate predictions about the cells of species it had never seen before, Quake says. It was also able to infer the relationships between different types of cells in a single species, despite being given no information about those links. “That’s what got me, personally, super-excited about this approach,” Quake says.

    Another team of researchers, including some at Google DeepMind, is exploring using AI to create virtual cells as well. They have trained AIs on large datasets of information about cells, allowing users to ask questions like, “How will this cell respond to this drug?” and then receive answers about which portions of the cell are likely to be affected. 

    These are just some of the approaches scientists are taking toward the creation of virtual cells. It’s likely that there will eventually be many different kinds of virtual cells, designed for different kinds of researchers to use. The virtual cell used by a cancer biologist, for instance, might be different from one used by a cell biologist looking to answer questions about how a given structure evolved. And it is possible they might use both traditional modeling approaches and AI.

    What virtual cells might allow us to do

    Virtual cells could make it faster and easier to discover new drugs. They could also give insight into how cancer cells evade the immune system, or how an individual patient might respond to a given therapy. They might even help basic scientists come up with hypotheses about how cells work that can steer them toward what experiments to do with real cells. “The overall goal here,” Quake says, “is to try to turn cell biology from a field that’s 90% experimental and 10% computational to the other way around.” 

    Some scientists question how useful predictions made by AI will be, if the AI can’t provide an explanation for them. “The AI models, normally, are a black box,” says Erick Armingol, a systems biologist and post-doctoral researcher at the Wellcome Sanger Institute in the U.K. In other words, they give you an answer, but they can’t tell you why they gave you that answer.

    “Personally, why I ended up in this field is because I wanted to simulate the whole human body and how the cells connect to each other and interact. So that’s the dream,” he says. Black-box answers might be helpful for steering drug development, but they might not be as useful to basic scientists—at least not the way many AIs are currently set up. (Karaletsos, from CZI, says that some of their AIs are set up to provide explanations of their reasoning. “We want to understand, not just predict,” he says.)

    Johnson, who authored a paper in 2023 about the importance of building virtual cells, hopes that whatever scientists end up building will be capable of being visualized. His ideal is “a visual, interactive, intuitive version of something complicated,” he says. “I think AI is absolutely critical to enabling all this. I’m just not interested in black-box predictions as the primary outcome.”

    Regardless of how they are built, it may be a while before virtual cells of some kind are up and running. “This is not something that’s going to be done next year,” says Quake. “I think it will take a full decade to realize the potential.”

    But since that long-ago lunchtime chat, Johnson says, advances in cell biology and in computer science have fundamentally shifted the prospects of someday having a virtual cell. “I no longer feel like a lunatic just ranting about this,” he says. “It feels plausible now.”

    Veronique Greenwood

    Source link

  • How to Use Gemini As Your Custom Newspaper Service

    • I recently discovered that you can use Gemini to act as your personal newspaper, only feeding you the news you are interested in.
    • This can save a lot of your time since you do not have to check multiple apps, and you do not have to prompt Gemini to do this every time.
    • In this article, I have discussed how you can program Gemini to give you a personalised news digest.

    Gemini can do a lot, and I am still finding new things I can do with its help. I recently discovered that you can use Gemini to act as your personal newspaper, only feeding you the news you are interested in. This sounds a bit out there, but it does work; you only have to let Gemini know your area of interest. This can save a lot of your time since you do not have to check multiple apps, and you do not have to prompt Gemini to do this every time. So in this article, we will learn how to use Gemini to get you all the major headlines without any extra steps.

    Your Morning Headlines, Now Personalized

    Traditional newspapers are great, but Gen Z and most other people are now over them. Most of us like to just wake up and start doomscrolling. However, instead of hopping on random apps, getting all the major headlines in your areas of interest is always better. That is what you can achieve with a simple prompt in Gemini. Not only can you get the news, but you can also ask Gemini to give you an overview of your entire day by accessing your calendar and emails. All of this can be done with two simple prompts, which you can find in the section below.

    Get Personalized News Using Gemini

    You need to have the Gemini application on your smartphone; you can download it from the Play Store or the App Store. Once you have downloaded it, follow the steps mentioned below.

    1. Open your Gemini application and tap on the profile icon on the upper right of the screen.

    2. From there, tap on Saved info.

    3. In the new menu, you will see an Add button. Tap on it.

    4. Enter the prompt mentioned below, and then tap on OK.

    Create a daily morning summary of the latest news stories on world news, tech updates, and artificial intelligence news, along with major Indian economic updates.

    Prioritize any smartphone news and AI news, and also add one finance tip every morning.
    You can share any anime updates if something new pops up.

    The keyword for this is Morning News

    Note – You can also add your Google Calendar info and email updates to the daily digest with a simple tweak to the prompt you wrote earlier. However, the results are inconsistent; some days it works just fine, and some days Gemini is unable to access Gmail or Calendar to give me updates.
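    If you would rather script a similar digest yourself instead of using the app’s Saved info feature, here is a minimal sketch using Google’s google-generativeai Python client. The model name, prompt wording, and headline list are assumptions on my part, and a plain API call will not browse live news on its own, so this version summarizes headlines you feed it (for example, from your own RSS reader).

    # Minimal sketch (not the in-app Saved info flow): summarize headlines you
    # supply into a personalised morning digest via the Gemini API.
    # Assumptions: the google-generativeai package is installed, GEMINI_API_KEY
    # is set, and "gemini-1.5-flash" is available to your account.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")

    headlines = [
        # Replace with headlines pulled from your own feeds or a news API.
        "Example headline 1",
        "Example headline 2",
    ]

    prompt = (
        "Create a daily morning summary of world news, tech updates, AI news, "
        "and major Indian economic updates from the headlines below. "
        "Prioritize smartphone and AI news, add one finance tip, and mention "
        "any anime updates if present.\n\n" + "\n".join(headlines)
    )

    response = model.generate_content(prompt)
    print(response.text)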

    FAQs

    Q. Can I use Gemini to track my calorie intake?

    Yes, you can use Gemini as your personal calorie tracker. You will need to enter all the food items you have consumed, and it can give you a calorie count. Furthermore, if you add what kind of diet you are planning, it can also give you suggestions.

    Q. How to generate viral Gemini images?

    To generate the viral Gemini images, tap on Images when you open Gemini and enter the right prompt. Results can vary depending on how well the prompt is written.

    Wrapping Up

    In this article, I have discussed how you can program Gemini to give you a personalised news digest. This works great for people who are always on the go. Furthermore, you can tailor it to the niches you want. I personally like to get financial advice to help me make better decisions, so I added that to my prompt as well. You can make your own little changes.

    Dev Chaudhary

    Source link

  • FinAi News launches AI News Tool for smarter content navigation

    FinAi News is pleased to announce the launch of its AI News Tool, a new feature designed to enhance how you, our subscribers, engage with our extensive library of financial technology coverage. The tool provides an intelligence-driven content experience that makes navigating and discovering insights across FinAi News faster and more intuitive.

    Explore the AI News Tool here, and find it on our site menu right next to News. 

    The AI News Tool delivers personalized recommendations, enabling you to quickly identify the articles and topics most relevant to you.  

    The enhanced search functionality allows for content exploration by related topics, executives, financial institutions and more. Whether searching for in-depth coverage of industry leaders or the latest updates on specific markets, subscribers can now access information with greater ease. 

    For example, search the latest on what JPMorgan Chief Executive Jamie Dimon is saying about AI or which financial institutions are investing in agentic AI. The AI News Tool will offer insights and recommend articles to answer the questions.

    By combining seamless navigation, smart recommendations and advanced search, the AI News Tool empowers you, our subscribers, to unlock the full value of FinAi News content. 

    Whitney McDonald

    Source link

  • Oracle studies scammers, embeds AI into AML to thwart them

    Tech company Oracle is developing anti-money laundering tools with gen AI amid climbing fraud and financial crime.  Nearly $2 trillion — or roughly 5% of global GDP — is expected to be laundered this year, according to a Moody’s Ratings report published in June. “Criminals are more incentivized … than investigators because, for criminals, it’s […]

    Vaidik Trivedi

    Source link

  • 5 Ways to Get More Out of Your AI Tools During Strategic Planning

    My architecture training taught me that the best technology amplifies existing systems rather than replacing them. And as a former tech founder who scaled to the Inc. 500, I learned firsthand how systematic integration beats random tool adoption.

    Now, coaching dozens of leadership teams on strategy, I consistently see high-performing teams use AI to systematize the most critical and complicated parts of strategic planning and implementation. These teams don’t just use AI as assistants—they embed it into their strategic thinking and replanning processes to accelerate decision-making cycles and improve execution quality in ways their competitors can’t match.

    1. Competitive intelligence acceleration

    Traditional market research happens quarterly and delivers insights too late for strategic advantage. AI integration transforms this into continuous competitive awareness by automating data collection and pattern recognition across multiple sources.

    Set up AI systems to monitor competitor announcements, industry trends, and customer sentiment shifts, then feed this information directly into weekly strategic planning sessions. A leadership team implemented this approach and discovered a competitor’s pivot strategy three months before it became public knowledge, allowing them to adjust their product roadmap and capture market share the competitor had targeted.
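    As a concrete sketch of what such a setup could look like (the feed URLs and the summarization stub below are illustrative assumptions, not a prescribed stack), a short script can pull competitor and industry feeds each week and hand the fresh items to whatever LLM or analyst step turns them into planning notes:

    # Illustrative sketch: gather the past week's competitor/industry feed items
    # into one brief. Feed URLs are placeholders; summarize_for_planning is a
    # stub for the LLM (or analyst) step you actually use.
    import time
    import feedparser  # pip install feedparser

    COMPETITOR_FEEDS = [
        "https://example-competitor.com/press/rss",    # placeholder URL
        "https://example-industry-news.com/feed",      # placeholder URL
    ]

    ONE_WEEK = 7 * 24 * 3600

    def recent_items(feed_url, since_seconds=ONE_WEEK):
        """Yield one line per feed entry published within the time window."""
        cutoff = time.time() - since_seconds
        for entry in feedparser.parse(feed_url).entries:
            published = getattr(entry, "published_parsed", None)
            if published and time.mktime(published) >= cutoff:
                yield f"- {entry.title} ({entry.link})"

    def summarize_for_planning(items):
        # Stub: pass these lines to your LLM of choice to produce
        # "what changed, why it matters, what we should do" notes.
        return "\n".join(items)

    if __name__ == "__main__":
        lines = [line for url in COMPETITOR_FEEDS for line in recent_items(url)]
        print(summarize_for_planning(lines))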

    2. Decision-making bias correction

    Leadership teams consistently fall victim to confirmation bias and groupthink during strategic planning. AI integration addresses this by generating alternative perspectives and challenging assumptions through structured questioning. Configure AI to review strategic proposals and generate counterarguments, alternative scenarios, and questions the team hasn’t considered. During planning sessions, use AI-generated prompts to force examination of blind spots and unstated assumptions.

    One team I worked with used this approach when evaluating a major market expansion and discovered they had overlooked regulatory risks that would have cost millions, ultimately choosing a different expansion strategy that delivered better results.

    3. Strategy translation clarity

    High-level strategic vision often fails because teams struggle to translate broad concepts into specific, measurable objectives. AI integration solves this by systematically breaking down strategic initiatives into concrete action items and success metrics. Input your strategic objectives into AI systems and generate detailed implementation plans, potential obstacles, and measurement frameworks. Use AI to identify gaps between vision and execution before they become problems.

    A leadership team used this method to transform their “digital transformation” goal into 47 specific actions with clear owners and deadlines, achieving their transformation objectives six months ahead of schedule.

    4. Risk assessment automation

    Most teams identify risks reactively after problems emerge rather than proactively during planning phases. AI integration enables continuous risk monitoring and mitigation strategy development before threats materialize. Build AI systems that scan internal operations, market conditions, and external factors for emerging risks, then automatically generate mitigation options for leadership review. Configure alerts for risk threshold breaches and predetermined response protocols.

    A team implemented this approach and identified supply chain vulnerabilities eight weeks before disruptions occurred, allowing them to secure alternative suppliers while competitors faced shortages.

    5. Execution monitoring systems

    Strategic plans fail because teams lack real-time visibility into implementation progress and course correction capabilities. AI integration provides continuous performance monitoring and automated insights into execution effectiveness. Connect AI systems to operational metrics, customer feedback, and team performance indicators to identify execution gaps immediately rather than waiting for quarterly reviews. Generate weekly execution reports highlighting progress against strategic objectives and recommended adjustments.

    One leadership team used this system to identify that their customer acquisition strategy was working but their retention efforts were failing, allowing them to reallocate resources and recover their annual targets.

    Action Items

    Teams that systematically integrate AI into strategic planning consistently outperform competitors who treat AI as separate tools rather than strategic amplifiers. The competitive advantage comes from enhanced decision speed and quality, not from having better technology.

    • Which of your current strategic planning processes would benefit most from continuous AI-powered insights?
    • How could AI-generated alternative perspectives improve your team’s decision-making quality?
    • What strategic blind spots might AI help your leadership team identify before they become problems?

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Bruce Eckfeldt

    Source link

  • Samsung Set for Highest Q3 Profit in Three Years as AI Demand Lifts Chip Prices

    Samsung Electronics is expected to post its highest third-quarter profit since 2022, driven by higher memory chip prices supported by server demand as customers rebuild inventories, analysts’ estimates showed.

    The world’s biggest maker of memory chips is projected to report an operating profit of $7.11 billion for the July-September period, according to LSEG SmartEstimate from 31 analysts, which is weighted toward those who are more consistently accurate. This would be up 10 percent from a year earlier.

    Analysts attributed the recovery mainly to better conventional memory chip pricing, which would offset weaker sales volumes of high-bandwidth memory (HBM) chips as Samsung has yet to supply its latest HBM products to Nvidia.

    HBM chips, critical for artificial intelligence (AI) development, are designed to reduce power consumption and process large datasets by stacking chips vertically.

    Analysts said demand for memory chips, particularly from hyperscalers and AI-related investments for services such as ChatGPT, has put more workload on general servers, thus boosting conventional memory chip prices.

    Prices of some DRAM chips, widely used in servers, smartphones and PCs, jumped 171.8 percent in the third quarter from a year earlier, according to TrendForce data.

    While Samsung’s conventional memory business performed well, analysts said delays in supplying its latest 12-layer HBM3E chips to Nvidia have hurt its profit and share price.

    Rivals SK Hynix and Micron have gained more from AI-driven demand, while Samsung’s exposure to China, where advanced chip sales are restricted by the United States, has constrained its growth.

    Analysts said market sentiment toward Samsung’s shares and chip business, including both memory and contract chip manufacturing, is expected to improve as it secures supply deals with major customers such as OpenAI and Tesla.

    Samsung shares have risen more than 43% following its announcement of a chip supply deal with Tesla in July.

    During OpenAI CEO Sam Altman’s visit to South Korea earlier this month, Samsung, SK Hynix and OpenAI announced partnerships to supply advanced memory chips to the Stargate project.

    The AI chip deal between OpenAI and AMD, one of Samsung’s major HBM customers, would also benefit Samsung, said Ryu Young-ho, a senior analyst at NH Investment & Securities.

    Ryu added that Samsung’s $16.5 billion foundry deal with Tesla has lifted expectations that Samsung’s struggling contract chip manufacturing business could win more orders from major tech firms if the company delivers the project as planned.

    While recent AI-driven supply deals signal a positive outlook for Samsung, analysts cautioned that uncertainties remain, including potential U.S. tariffs on chips and China’s tightened export controls on rare earth materials used in advanced chips and manufacturing equipment.

    In September, Micron said it expects to sell out all of its HBM chips for calendar year 2026 in the coming months due to strong demand.

    Samsung will announce its estimates on revenue and operating profit on Tuesday, with full results due later this month.

    Reporting by Heekyong Yang; Editing by Jacqueline Wong

    Reuters

    Source link

  • Hybrid Work Isn’t Dead. It’s Being Optimized

    For office employees, the transition from the traditional Monday-through-Friday, 9-to-5 office to a more flexible, technology-forward workplace is nearly complete. Before the COVID-19 pandemic, the traditional workplace relied heavily on the physical environment and tools that employees had access to. Hybrid or remote work options were rare, coveted benefits, usually limited to the tech sector. Otherwise, people came into an office daily to do their work surrounded by colleagues.

    Today, the workplace looks different as hybrid and remote work options are mainstream. With technology enabling people to work from anywhere—and AI becoming an integral part of the workforce—flexibility has become a strategic lever for many organizations, helping to maximize productivity, streamline overhead, and enhance employee retention. Despite the slew of Return-to-Office (RTO) headlines over the last few years, the reality is many leaders are leaving the traditional work model in the rearview and leaning into hybrid work. 

    The prominence of hybrid work  

    For small and mid-sized businesses (SMBs), the prominence of hybrid work has remained stable, with only minor shifts over the last three years. According to Vistage CEO Confidence Index data, 43% of all SMBs offer hybrid work as of Q3 2025, a decrease of seven percentage points since Q2 2022. Meanwhile, the percentage of workplaces that are fully remote has increased slightly, to 8% in Q3 2025 from 7% in Q2 2022.

    A seven-percentage-point rise in fully onsite work, to 45% in Q3 2025 from 38% in Q2 2022, is a far cry from recent rhetoric around return to office. Hybrid work is far from dead; for many, it’s the new normal. The once-deafening drumbeat of RTO has lost momentum since early 2025, as CEOs shift their focus to pressing matters such as economic uncertainty, policy confusion, and the three Is of inflation, interest rates, and immigration. As a result, some of the strictest and most inflexible RTO plans have stalled, cementing hybrid work’s place in the modern world. Hybrid work hasn’t disappeared. It has evolved as leaders determine the best mix of in-person and at-home work for their organization’s needs.

    Why hybrid work didn’t end when the market declined 

    It goes without saying that the job market of 2025 is nowhere near as robust as it was in 2022.  Rather than the Great Resignation, many organizations face the “Great Stay,” with employees holding onto jobs amid uncertainty and a weak market. Enter the rise of “quiet quitters” who are completely unengaged but try to do just enough to avoid being let go. In every workplace, employee engagement remains critical to driving performance, productivity, and ultimately return on investment. Regardless of how strong or weak the job market is, rocking the boat on workplace dynamics is a risk to business success. In the short term, employees may comply, but at what cost?  

    Amid economic uncertainty, geopolitical tensions and the rapid adoption of new technology, hybrid work is playing a key role in how many business leaders are preparing for the year ahead. Here are four ways CEOs are using hybrid work to optimize their business: 

    Reinforcing culture and collaboration 

    When hybrid work first became more widespread during the COVID pandemic, CEOs expressed concerns about fostering company culture and collaboration virtually. Today, many leaders find hybrid work can help elevate and better define culture and collaboration by reinforcing a more intentional approach to time spent in the office. 

    Improving the physical environment  

    A physical environment isn’t just what an office space looks like; it also includes all the tools and technology people need to be successful in their roles. Today’s most forward-thinking leaders are creating in-person environments that complement at-home work while improving infrastructure and rethinking floor plans to promote teamwork.

    Providing more flexibility 

    In Vistage’s most recent survey of CEOs, respondents identified flexibility as the third leg of the workplace. Since experiencing hybrid benefits during shutdowns, employee satisfaction rates have become increasingly reliant on people’s ability to achieve better balance through flexible arrangements. 

    Building better bosses  

    It’s often said that people don’t leave bad jobs. Instead, they leave bad bosses. Bosses play the single most significant role in shaping employee experience. Businesses can only reap the full benefits of their hybrid workforce if managers are equipped to help teams maximize their workflows and technology, while upholding strong communication both in-person and virtually.  

    In today’s world of constant change and instability, the principles of the most productive and engaged workplaces remain the same. Instead of reinstating the traditional 9-to-5 work model, many leaders are leveraging hybrid work to enhance employee engagement and increase productivity. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Joe Galvin

    Source link

  • Prince Harry and Meghan call out the harmful effects of social media on today’s youth | TechCrunch

    The guests sipped prosecco and chattered away while dessert was served at the third annual Project Healthy Minds Gala on Thursday night in New York.

    The evening was winding down, but there was still one big award to give out: Humanitarian of the Year, which this year would be honoring Prince Harry and Meghan, the Duke and Duchess of Sussex, for creating The Parents Network through their nonprofit Archewell Foundation. The Parents Network supports families who have been harmed by social media.  

    Earlier this year, it hosted an event where the faces of young children were shown on giant smartphone screens; the children had lost their lives in ways their parents believe social media contributed to.  

    Thursday’s Gala was hosted by the nonprofit Project Healthy Minds, which provides free access to mental health services, especially focusing on young people who are struggling in a world dominated by technology. The event, and the conference the following day, gave a look into how young people and their parents are seeing social media, and revealed the grave impact these platforms have had on mental health.  

    “Let me share a number with you,” Prince Harry said as he and his wife took the stage to accept the award. “Four thousand. That’s how many families the Social Media Victims Law Center is currently representing.”  

    That number only represents the parents who have been able to link their child’s harm to social media and who have the capacity to “fight back against some of the wealthiest, most powerful corporations in the world,” said Prince Harry.  

    “We have witnessed the explosion of unregulated artificial intelligence, heard more and more stories from heartbroken families, and watched parents all over the world become increasingly concerned about their children’s digital lives,” he continued.  

    He said these families were up against corporations and lobbyists that were spending millions to suppress the truth; that algorithms were designed to “maximize data collection at any cost,” and said that social media was preying upon children.  

    Then, he called out Apple for its user privacy violations and Meta for saying privacy restrictions would cost them billions. He spoke about the harms of AI and what happened when researchers, posing as children, tested out an increasingly popular AI chatbot. “They experienced a harmful interaction every five minutes,” he said.  

    “This wasn’t content created by a third party,” he continued. “These were the company’s own chatbots working to advance their own depraved internal policies.”

    The big announcement of the night was that The Parents Network would partner with ParentsTogether, another organization focused on family advocacy and online safety, to do more work protecting children from social media.  

    This is not the first time Prince Harry, in particular, has spoken out about social media harms. Back in April, the prince visited youth leaders in Brooklyn to talk to them about the rising influence of tech platforms, which have been incentivized by profit rather than safety. In January, he and Meghan also called out Meta for undermining free speech after the platform announced it would make changes to its fact-checking policy.  

    The couple’s thoughts about the influence of tech companies do not exist in isolation.

    Numerous studies have shown the negative impact social media is having on young people, creating a mental health crisis and fueling a loneliness epidemic. The next day, on Friday, World Mental Health Day, Project Healthy Minds hosted a festival of talks about mental health. For a few of those panels, Project Healthy Minds teamed up with Prince Harry and Meghan’s Archewell Foundation to hold discussions with parents, advocates, and experts about how social media has rewired childhood.

    Following the gala was a festival about mental health

    The first panel, simply called “How Are Young People Doing in the Digital Age,” was introduced by Harry.  

    One panelist, Katie, spoke about how when she was just 12 years old, TikTok would fill her For You page with videos about dieting and losing weight; Katie ultimately developed an eating disorder.  

    Another panelist was Isabel Sunderland, the policy lead for the organization Design It For Us, which pushes for safer social media.  

    She recalled one day coming across an article about the Myanmar genocide, to which Meta’s platform, Facebook, was later accused of contributing. The article led her down a rabbit hole as she sought to understand how the platforms she uses every day could be used as tools that foment “hate and violence.” She had always thought it was her fault that she encountered content on harmful topics like eating disorders.

    “What I came to find through this research is that in fact, it’s designed by social media companies to increase addiction and time spent on their platforms,” she said.  

    The next panel, focused on childhood, spoke further about the harm social media is causing children. It was introduced by Meghan and moderated by journalist Katie Couric. 

    It began with Jonathan Haidt, the author of the best-selling and controversial book “The Anxious Generation,” who presented his findings.

    Anxiety is up. Depression is up. Children are struggling in school. More children find their lives to be meaningless. There is no more outside playtime. They aren’t learning social cues because they aren’t going outside. Boys are being led down the path to gambling addictions. Young people don’t know how to handle conflict in real life because they aren’t spending time in real life — only online.  

    And while states are trying to pass legislation, it hasn’t been without a fight — the tech lobbies are working hard.

    “Play is about brain development,” Haidt told Couric on the panel. “When animals are deprived of play in early childhood, they come out much more anxious in adulthood.”  

    There is even a lessening of proper boredom time — those moments one spends looking out the window during a car ride or staring aimlessly ahead while waiting in a queue. Those moments gave the brain time to rest and have now been replaced by scrolling on tablets and smartphones.  

    Amy Neville, the community manager of The Parents’ Network and President of the Alexander Neville Foundation, joined the panel. She lost her son, Alexander, to an overdose, and is suing Snapchat for providing drug dealers access to her son.  

    “I quickly realized that families all over the United States were waking up, finding their kids dead in their bedrooms from pills purchased off of Snapchat,” she said. Her lawsuit is moving forward. “I feel like it’s a fight to the death,” she said. “I’m willing to go there.”  

    Another mother, Kirsten, took the stage. She is the mother of the young girl Katie, who sat on the previous panel. She spoke about how she thought she was doing everything right — checking her daughter’s phone each night and putting it away before she went to sleep. Katie still ended up in the hospital, though, with an eating disorder.  

    Kirsten went through Katie’s text messages and search history. Someone then sent her an article about how TikTok was showing young girls eating-disorder content.

    “My husband and I, we didn’t know about the For You page,” she said. “This was not content that my daughter was seeking, but rather content that was coming to her on repeat.”  

    The consensus of that panel — as with both events — was a call for more action.

    Throughout the event, people called for more legislative action, more accountability from tech platforms, more speaking out, and more people banding together to put boundaries between themselves and social media. And though harm fills the present, hope remains around the corner.

    “We can and we will build the movement that all families and all children deserve,” Meghan said at the Gala. “We know that when parents come together, when communities unite, waves are made. We’ve seen it happen, and we’re watching it grow.”  

    Dominic-Madori Davis

    Source link

  • The Destruction in Gaza Is What the Future of AI Warfare Looks Like

    In 2021, Israel used “the Gospel” for the first time. That was the codename for an AI tool deployed in the 11-day war against Gaza that the IDF has since deemed the first artificial intelligence war. The conclusion of that war didn’t end the conflict between Israel and Palestine, but it was a sign of things to come.

    The Gospel rapidly spews out a mounting list of potential buildings to target in military strikes by reviewing data from surveillance, satellite imagery, and social networks. That was four years ago, and the field of artificial intelligence has since experienced one of the most rapid periods of advancement in the history of technology.

    Israel’s latest offensive on Gaza, which marked two years on Tuesday, has been called an “AI Human Laboratory” where the weapons of the future are tested on live subjects.

    Over the last two years, the conflict has claimed the lives of more than 67,000 Palestinians, upwards of 20,000 of whom were children. As of March 2025, more than 1,200 families were completely wiped out, according to a Reuters examination. Since October 2024, the number of casualties provided by the Palestinian Ministry of Health has only included identified bodies, so the real death toll is likely even higher.

    Israel’s actions in Gaza amount to a genocide, a UN Commission concluded last month.

    Hamas and Israel agreed to the first phase of a ceasefire deal that was announced on Wednesday, but Israeli strikes on Gaza were still continuing as of Thursday morning, according to Reuters. The agreed-upon plan involves the release of Israeli hostages by Hamas in exchange for 1,950 Palestinians taken by Israel and the long-awaited aid convoys. But it does not involve the creation of a Palestinian state, which Israel strictly opposes. On Friday afternoon, Israel said that the ceasefire agreement is now in effect, and President Trump has said there will be a hostage release next week. There have been at least three ceasefire agreements since October 7, 2023.

    Aiding Israel’s destruction in Gaza is an unprecedented reliance on artificial intelligence that is, at least partially, supplied by American tech giants. Israel’s use of AI in surveillance and wartime decisions has been documented and criticized time and again by various media and advocacy organizations over the years.

    “AI systems, and generative AI models in particular, are notoriously flawed with high error rates for any application that requires precision, accuracy, and safety-criticality,” Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, told Gizmodo. “AI outputs are not facts; they’re predictions. The stakes are higher in the case of military activity, as you’re now dealing with lethal targeting that impacts the life and death of individuals.”

    AI that generates kill lists

    Although Israel has not fully disclosed its intelligence software and has denied some claims about its use of AI, numerous media and non-profit investigations paint a different picture.

    Also used in Israel’s 2021 campaign were two other programs called “Alchemist,” which sends real-time alerts for “suspicious movement,” and “Depth of Wisdom” to map out Gaza’s tunnel network. Both are reportedly in use this time around, as well.

    On top of the three programs Israel has previously openly owned up to using, the IDF also utilizes Lavender, an AI system that essentially generates a kill list of Palestinians. The AI calculates a percentage score for how likely a Palestinian is to be a member of a militant group. If the score is high, the person becomes the target of missile attacks.

    According to a report from Israeli magazine +972, the army “almost completely relied” on the system at least in the early weeks of the war, with full knowledge of the fact that it misidentified civilians as terrorists.

    The IDF required officers to approve any of the recommendations made by the AI systems, but according to +972, that approval process just checked whether or not the target was male.

    Many other AI systems that are in use by the IDF are still in the shadows. One of the few programs also unveiled is “Where’s Daddy?” which was built to strike targets inside their family homes, according to +972.

    “The IDF bombed [Hamas operatives] in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations,” an anonymous Israeli intelligence officer told +972.

    AI in surveillance

    The Israeli army also uses AI in its mass surveillance efforts. Yossi Sariel, who led the IDF’s surveillance unit until late last year, when he resigned citing the failure to prevent the Oct. 7 Hamas attack, spent a sabbatical year training at a Pentagon-funded defense institution in Washington, D.C., where he shared radical visions of AI on the battlefield, according to a professor at the institute who spoke to the Washington Post last year.

    A Guardian report from August found that Israel was storing and processing mobile phone calls made by Palestinians via Microsoft’s Azure Cloud Platform. After months of protests, Microsoft announced last month that it is cutting off access to some of its services provided to an IDF unit after an internal review found evidence that supported some of the claims in the Guardian article.

    Microsoft denies prior knowledge, but the Guardian report paints a different picture. Microsoft CEO Satya Nadella met with IDF’s spying operations head Sariel in late 2021 to discuss hosting intelligence material on the Microsoft cloud, the Guardian reported.

    “The vast majority of Microsoft’s contract with the Israeli military remains intact,” Hossam Nasr, an organizer with No Azure for Apartheid and a former Microsoft worker, told Gizmodo last month.

    When asked for comment, Microsoft directed Gizmodo to a previous statement the tech giant made on the ongoing internal investigation into how its products are used by Israel’s Ministry of Defense.

    On top of storing and combing through data, AI was used in translating and transcribing the gathered surveillance. But an internal Israeli audit, according to the Washington Post, found that some of the AI models that the IDF used to translate communications from Arabic had inaccuracies.

    An Associated Press investigation from earlier this year found that advanced AI models by OpenAI, purchased via Microsoft’s Azure, were used to transcribe and translate the intercepted communications. The investigation also found that the Israeli military’s use of OpenAI and Microsoft technology skyrocketed after Oct 7, 2023.

    AI-driven surveillance efforts don’t just target residents of Gaza and the West Bank, but they have also been used against pro-Palestinian protestors in the United States. An Amnesty International report from August found that AI products by American companies like Palantir were used by the Department of Homeland Security to target non-citizens who speak out for Palestinian rights.

    “Palantir has had federal contracts with DHS for fourteen years. DHS’s current engagement with Palantir is through Immigration and Customs Enforcement, where the company provides solutions for investigative case management and enforcement operations,” a DHS spokesperson told Gizmodo. “At the Department level, DHS looks holistically at technology and data solutions that can meet operational and mission demands.”

    Palantir has not yet responded to a request for comment.

    AI-driven accusations

    The proliferation of AI-generated video and images has done more than just flood the internet with slop. It has also caused widespread confusion for social media users over just what’s real and what’s fake. The confusion is understandable, but it has been co-opted to discredit the voices of the oppressed. In this case, too, Gazans have been at the receiving end of the attacks.

    The videos and photos coming out of Gaza are referred to in Israel as “Gazawood”, with many claiming that the images are staged or completely AI-generated. Since Israel has not allowed foreign journalists into Gaza and not only discredits but also disproportionately targets the enclave’s journalists in air strikes, the truth becomes harder to validate.

    In one instance, Saeed Ismail, a real 22-year-old Gazan who had been raising money online to feed his family, was accused of being AI-generated due to misspelled words on his blanket featured in one video. Gizmodo verified his existence in July.

    American big tech is leading the way

    While Israeli tech startups find a sizable market in the U.S. and deals with government agencies like ICE, the relationship goes both ways.

    It’s tough to precisely map out which American companies have fed the technology used to target and kill Palestinians. But it is publicly known which Big Tech companies proudly partner with the Israeli army. And the answer is almost all of them.

    Microsoft has received much of the recent attention from activists, but Google, Amazon, and Palantir are considered some of the other top American third-party vendors for the IDF.

    Google and Amazon employees have been protesting for years over “Project Nimbus,” a $1.2 billion contract signed in 2021 that tasks the American tech giants with providing cloud computing and AI services to the Israeli military.

    Amazon suspended an engineer last month for emailing the CEO, Andy Jassy, about the project and speaking out against it in company Slack channels.

    Although Google has also clamped down on employee criticism, when the deal was signed in 2021, Google officials themselves raised concerns that the cloud services could be used for human rights violations against Palestinians, according to a 2024 New York Times report.

    The Israeli military also requested access to Google’s Gemini as recently as last November, according to a Washington Post report.

    Palantir, which offers software like the Artificial Intelligence Platform (AIP) that analyzes enemy targets and proposes battle plans, agreed to a strategic partnership with the IDF to supply its technology to “the current situation in Israel,” Palantir executive vice president Josh Harris told Bloomberg last year.

    Palantir has been under fire globally for its partnership with the Israeli army. Late last year, a major Norwegian investor sold all of its Palantir holdings over concerns about violations of international human rights law. The investment firm said its analysis indicated that Palantir aided an AI-based IDF system that ranked Palestinians by their likelihood of launching “lone wolf terrorist” attacks, which then led to preemptive arrests.

    CEO Alex Karp has stood behind the company’s decision to back Israel in its war against Gazans many times.

    The IDF has also inked data center deals with Cisco and Dell, and a cloud computing deal with independent IBM subsidiary Red Hat.

    “IBM holds human rights and freedoms in the highest regard, and we are deeply committed to conducting our business with integrity, guided by our robust ethical standards,” IBM told Gizmodo. “As for the UN report, most of its claims are inaccurate and should not be treated as fact.”

    Cisco, Dell, Google, Amazon, and OpenAI did not respond to a request for comment.

    In August, the Washington Post unveiled a 38-page alleged plan for Gaza to become a U.S.-operated tech hub.

    Called the Gaza Reconstitution, Economic Acceleration and Transformation Trust (or GREAT), the plan involves “temporarily relocating” the remaining two million or so Palestinians to build six to eight AI-powered smart cities, regional data centers to serve Israel, and something called “The Elon Musk Smart Manufacturing Zone.” The plan would convert Gaza into a “trusteeship” administered by the U.S. for at least 10 years.

    Future of AI warfare and surveillance

    AI companies want in on the battlefield.

    There is huge demand from militaries around the globe for the AI systems provided by tech giants. America is pouring millions of dollars into integrating AI systems into military decision-making, such as identifying strike targets as part of its Thunderforge program. Chinese leader Xi Jinping has also reportedly made military artificial intelligence a top strategic priority.

    As the technology is still maturing, active war zones and the civilians living in them become test subjects for AI-powered killing machines. Like Gaza, Ukraine has also been described as a real-time testing ground for AI-powered military technology. In that case, though, the Ukrainian government itself is on board.

    Over the summer, the Ukrainian military announced “Test in Ukraine,” a scheme that invites foreign arms companies to test out their latest weapons on the front lines of the Russia-Ukraine war.

    On top of its abundant deals with the Israeli army, Palantir is also very popular with the American Department of Defense. The company inked a $10 billion software and data contract with the U.S. Army in August.

    One could argue that profit will always override every other incentive, but even Palantir drew a line recently when asked to participate in a controversial UK digital identification program, arguing that the program needed to be “decided at the ballot box,” according to the Times.

    We’ve seen tech companies back away from military projects, like Project Maven, in the past when they felt the cultural winds blowing against them. For now, the Trump administration wants Americans leading the way on the AI battlefield. While external criticism and internal pressure from employees still exist at the biggest AI firms, they currently have a plausible argument that this is what the American people voted for. Until that changes, the gold rush for military funds will persist.

    Rhett Jones

    Source link

  • It’s not too late for Apple to get AI right | TechCrunch

    This week, OpenAI announced that apps can now run directly inside ChatGPT, letting users book travel, create playlists, and edit designs without switching between different apps. Some immediately declared it the app platform of the future — predicting a ChatGPT-powered world where Apple’s App Store becomes obsolete.

    But while OpenAI’s app platform presents an emerging threat, Apple’s vision for an improved Siri — though still seriously delayed — could still play out in its favor.

    After all, Apple already controls the hardware, the operating system, and has roughly 1.5 billion iPhone users globally, compared to ChatGPT’s 800 million weekly active users. If Apple’s bet pays off, it could position the iPhone maker in a way that would not only maintain its app industry dominance but also modernize how we use apps in the AI era.

    Apple’s plan is to kill the app icon without killing the app itself. Its vision for AI-powered computing — introduced at its developer conference last year — would see iPhone users interact with an overhauled version of Siri and a revamped system that changes the way you use apps on your phone. (Imagine less tapping and more talking.)

    Apps are passé, long live apps?

    It’s an idea whose time has come.

    Organizing little tappable icons on your iPhone’s Home Screen to make online information more accessible is a dated metaphor for computing. Meant to resemble a scaled-down version of a computer’s desktop, apps are becoming a less common way for users to interact with many of their preferred online services.

    These days, consumers are just as likely to ask an AI assistant for a recommendation or insight as they are to do a Google search or launch a dedicated, single-purpose app, like Yelp. They’ll talk out loud to their smart speakers or Bluetooth-connected AirPods to play their favorite tunes; they’ll ask a chatbot for business information or a summary of reviews for a new movie or show.

    The AI, a large language model trained on web-scraped data and more, determines what the user wants to know and spits out a response.

    This is arguably easier than scouring through Google’s search results for the right link with the answer. (That’s something Google itself realized over a decade ago, when it started putting answers to user queries right on the search results page.)

    AI is also often easier than finding the right app on your now overcrowded iPhone, launching it, and then interacting with its user interface — which varies from app to app — to perform your task or get an answer to your question.

    However, ChatGPT’s app system, while seemingly improving on this model, remains locked inside the ChatGPT user experience. It requires consumers to engage in a chatbot-style interface to use their apps, which could require user education. To call up an app, you have to name it as the first word of your prompt or otherwise mention the app by name to get a button that prompts you to “use the app for the answer.” Then, you have to type in an accurate query. (If you mess this up, early tests by Bloomberg indicate you could get stuck on a loading screen with no results!)

    We have to wonder: is this the future of apps, or just the future while there’s no other competition? When another solution becomes available — one that’s built into your iPhone, no less — will consumers keep using ChatGPT, or will they give Siri another try? We don’t know, but we wouldn’t count out Apple yet, even though Siri has quite a bad reputation to salvage at this point.

    Siri may be an embarrassment as it stands today, but Apple’s overall ecosystem has advantages. For starters, consumers already have the apps they want to use on their phone or know how to find them on the App Store, if not. They’ve used many of these apps for years. Muscle memory goes a long way!

    Meanwhile, there are a few roadblocks to getting started with ChatGPT’s app platform.

    You have to install the app in question, of course; then you have to connect the app to ChatGPT by jumping through a warning-filled permission screen. This process requires you to authenticate with the app using your existing username and password, and to enter the two-factor authentication code, if applicable.

    After this one-time setup, things should be easier. For instance, after you generate a Spotify playlist with AI, it can be launched in the Spotify app with a tap.

    However, this experience won’t differ much from Apple’s plans if Apple is able to make things work as promised. Apple says you’ll be able to talk or text Siri to control your apps.

    There are other disadvantages to the OpenAI app model. You can only interact with one app at a time, instead of being able to switch back and forth between apps — something that could be useful when comparing prices or trying to decide between a hotel room and an Airbnb.

    Using apps within ChatGPT also strips away the branding, design, and identity that consumers associate with their favorite apps. (For those who hate how cluttered Spotify’s app has become, perhaps that’s a good thing. Others, however, will disagree.) And, in some cases, using the mobile app version to accomplish your goals may still be easier than using the ChatGPT app version because of the flexibility the former offers.

    Finally, compelling users to switch app platforms could be difficult when there isn’t an obvious advantage to using apps within ChatGPT — except for the fact that it’s neat that you can.

    Can Apple save Siri’s reputation with AI features?

    In its WWDC 2024 demonstration — which Apple swears was not “demoware” — the company showed how the apps would function under this new system and how they could use other AI features like proofreading.

    Most importantly, Apple told developers that they’ll be able to take advantage of some of its AI capabilities without having to do additional work — like a note-taking app using proofreading or rewriting tools. Plus, developers who have already integrated SiriKit into their apps will be able to do more in terms of having users take action in their apps. (SiriKit, a toolkit for making apps interoperable with Siri and Apple’s Shortcuts, is something developers have been using since iOS 10.)

    These developers will see immediate enhancements when the new Siri rolls out.

    Apple said it will focus on categories like Notes, Media, Messaging, Payments, Restaurant Reservations, VoIP Calling, and Workouts, to start.

    Apps in these categories will be able to let their users take actions via Siri. In practice, that means Siri will be able to invoke any item from an app’s menus. For example, you could ask Siri to see your presenter notes in a slide deck, and your productivity app would respond accordingly.

    The apps would also be able to access any text displayed on the page using Apple’s standard text systems. That could make the app interactions feel more natural, without the user having to give specifically worded prompts or commands. For instance, if you had a reminder to wish your grandpa a happy birthday, you could say “FaceTime him” to take that action.

    Apple’s existing Intents framework is also being updated to gain access to Apple Intelligence, covering even more apps in categories like Books, Browsers, Cameras, Document Readers, File Management, Journals, Mail, Photos, Presentations, Spreadsheets, Whiteboards, and Word Processors. Here, Apple is creating new “Intents” that are pre-defined, trained, and tested, and making them available to developers.

    That means you could tell the photo-editing app Darkroom to apply a cinematic filter to an image via Siri. Plus, Siri will be able to suggest an app’s actions, helping iPhone users discover what their apps can do and take those actions.

    Developers have been adopting the App Intents framework, introduced in iOS 16, because it offers other functionality to integrate their app’s actions and content with other platform features, including Spotlight, Siri, the iPhone’s Action button, widgets, controls, and visual search features — not just Apple Intelligence.
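
    For readers curious what that developer-side work looks like, below is a minimal App Intents sketch in Swift. It is an illustration only: the intent name, parameter, and filter behavior are hypothetical and not taken from Apple’s documentation or any shipping app.

    import AppIntents

    // A minimal, hypothetical App Intent a photo-editing app might expose to Siri.
    // The type name, parameter, and "filter" behavior are illustrative, not real app code.
    struct ApplyFilterIntent: AppIntent {
        static var title: LocalizedStringResource = "Apply Filter"
        static var description = IntentDescription("Applies a named filter to the current photo.")

        // Siri can prompt for this value in conversation if the user doesn't supply it.
        @Parameter(title: "Filter Name")
        var filterName: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            // A real app would look up and apply the named filter to the active image here.
            return .result(dialog: "Applied the \(filterName) filter.")
        }
    }

    An intent declared this way could then be invoked by voice or surfaced by the system in Shortcuts and Spotlight, which is part of why the framework has seen steady adoption.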

    Also, unlike ChatGPT, Apple runs its own operating system on its own hardware and offers the App Store as a discovery mechanism, the app infrastructure, and developer tools, APIs, and frameworks — not just the AI-powered interface that will help you use your apps.

    Though Apple may have to borrow some AI tech from others to do that last bit, it has the data to personalize your app recommendations, and, for the privacy-minded, the controls that let you limit how much information apps themselves can collect. (Where’s the “Do Not Track” option for ChatGPT’s app system, we wonder?)

    OpenAI’s system doesn’t work out of the box with all your apps at launch. It requires developer adoption and relies on the Model Context Protocol (MCP), a newer technology for connecting AI assistants to other systems. That’s why ChatGPT currently works with only a handful of apps, like Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva. MCP adoption is growing, but until it becomes widespread, Apple may have the extra time it needs to catch up.

    What’s more, word is that Apple’s AI system is nearly ready. The company is reportedly already testing it internally, allowing users to take actions in apps using Siri voice commands. Bloomberg reported that this smarter version of Siri works out of the box with many apps, including those from major players like Uber, AllTrails, Threads, Temu, Amazon, YouTube, Facebook, and WhatsApp. And it’s still on track to ship next year, Apple confirmed to TechCrunch.

    Apple has an iPhone, OpenAI has Jony Ive

    The iPhone’s status as an app platform will also be difficult to disrupt, even from a company as large and powerful as OpenAI.

    The ChatGPT maker understands this, too, which is why OpenAI is exploring its own device with Apple’s former head of design, Jony Ive. It wants its AI to become more of a part of consumers’ everyday lives and habits, which could require a hardware device.

    But, so far, the company has struggled to think up a better computing paradigm than the smartphone, reports indicate. At the same time, the general public has demonstrated an aversion to always-on AI devices, which bump up against existing social norms and threaten privacy.

    The AI backlash has targeted AI device maker Friend’s NYC subway posters, led Taylor Swift fans to attack their idol for dabbling in AI, and threatened the reputation of popular consumer brands and enterprise businesses alike. That leaves the future success of an OpenAI device in question.

    For now, that means OpenAI’s app model is one that essentially boils down to using its app to control other apps.

    If Apple gets its Siri upgrade right, that intermediary may not be necessary.

    Sarah Perez

    Source link