ReportWire

Tag: Google

  • Google is bringing Beam, its 3D video conferencing tech, to deployed service members

    Google has teamed up with the United Service Organizations (USO) to help deployed service members stay in touch with their families in a different way. As part of a pilot program, the company is bringing Google Beam, its 3D video communication tech, to USO service centers in the US and other countries starting in 2026.

    Google suggests that Beam can help military families who are separated by many miles feel like they are in the same room. While family members can keep in touch with deployed loved ones through group chats and video calls, chatting via Beam could help them feel closer together, if the tech works as well as promised.

    We got our first look at Beam — then known as Project Starline — in 2021. The holographic teleconferencing system uses 3D imaging, spatial audio and adaptive lighting to make video chats more immersive. Beam is primarily intended for enterprise clients (the first such device costs $25,000), but it’s interesting to see Google exploring other applications for the tech.

  • Google’s AI health coach will soon be available to some Fitbit Premium users

    Google’s AI health coach is nearly upon us, as a preview version is launching tomorrow for some Fitbit Premium users in the US. This will only be for Android devices at first, but the company promises an iOS version is in the works.

    This is a Public Preview version of the software, so think of it like a beta release. Google says it’ll incorporate user feedback to “add, change or improve features and capabilities.” The company warns users that this is a “new experience, so initially, there will be some gaps.”

    For the uninitiated, Google’s AI health coach is exactly what it sounds like. This is an AI chatbot intended to help users reach fitness and health goals. The company boasts that the tech is “secure, personalized and grounded in science.” Everything starts with a five to ten minute conversation with the coach to assess health and fitness goals.

    The coach can be a sounding board for personal health, fitness and sleep goals, but also acts as a personal trainer. Google says it can be used to review and adjust fitness plans, check progress, get advice on trends and create workouts. To that last point, the company says the chatbot can create workouts based on pre-existing constraints. For instance, users can ask the bot to make a workout that can be done in a cramped hotel room.

    The coach can also be used to brainstorm questions to ask a doctor and to track and analyze a number of sleep metrics. The bot provides a “detailed sleep analysis” and can allegedly understand patterns and trends that can impact sleep. All of this data can be accessed via the app.

    Because this is a preview build, it won’t roll out to everyone tomorrow. Eligible Fitbit Premium users will receive a notification that the software is ready to use. It works with any Pixel Watch or Fitbit device.

    The entire Fitbit app is being redesigned to focus more on AI and this is a large piece of the puzzle. Google promises integration with its health coach across every aspect of the app.

    Lawrence Bonk

  • Chatbots Are Pushing Sanctioned Russian Propaganda

    OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok are pushing Russian state propaganda from sanctioned entities—including citations from Russian state media, sites tied to Russian intelligence or pro-Kremlin narratives—when asked about the war against Ukraine, according to a new report.

    Researchers from the Institute of Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids—where searches for real-time data provide few results from legitimate sources—to promote false and misleading information. Almost one-fifth of responses to questions about Russia’s war in Ukraine, across the four chatbots they tested, cited Russian state-attributed sources, the ISD research claims.

    “It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” says Pablo Maristany de las Casas, an analyst at the ISD who led the research. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, which is a growing concern as more people use AI chatbots as an alternative to search engines to find information in real time, the ISD claims. For the six-month period ending September 30, 2025, ChatGPT search had approximately 120.4 million average monthly active recipients in the European Union according to OpenAI data.

    The researchers asked the chatbots 300 neutral, biased, and “malicious” questions relating to the perception of NATO, peace talks, Ukraine’s military recruitment, Ukrainian refugees, and war crimes committed during the Russian invasion of Ukraine. The researchers used separate accounts for each query in English, Spanish, French, German, and Italian in an experiment in July. The same propaganda issues were still present in October, Maristany de las Casas says.
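The headline figure (the share of answers citing state-attributed sources) is ultimately a tally over responses. A minimal sketch of that kind of measurement, using invented responses and a hand-picked domain list rather than the ISD's actual data:

```python
# Sketch of an ISD-style tally: what fraction of chatbot answers cite at
# least one sanctioned, state-attributed domain? The domain list and the
# responses below are illustrative stand-ins, not the ISD's dataset.

SANCTIONED_DOMAINS = {"rt.com", "sputnikglobe.com", "eadaily.com"}

def share_citing_sanctioned(responses):
    """responses: list of sets of cited domains, one set per chatbot answer."""
    if not responses:
        return 0.0
    flagged = sum(1 for cited in responses if cited & SANCTIONED_DOMAINS)
    return flagged / len(responses)

# Five hypothetical answers; one cites RT, giving 0.2, i.e. the
# "almost one-fifth" order of magnitude the report describes.
answers = [
    {"bbc.com"},
    {"rt.com", "reuters.com"},
    {"kyivindependent.com"},
    set(),
    {"apnews.com"},
]
print(share_citing_sanctioned(answers))  # 0.2
```

In the real study each answer would come from querying a chatbot account and extracting the domains it cites; only the counting step is shown here.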

    Amid widespread sanctions imposed on Russia since its full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media sources for spreading disinformation and distorting facts as part of Russia’s “strategy of destabilizing” Europe and other nations.

    The ISD research says chatbots cited Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and the R-FBI. Some of the chatbots also cited Russian disinformation networks and Russian journalists or influencers that amplified Kremlin narratives, the research says. Similar previous research has also found 10 of the most popular chatbots mimicking Russian narratives.

    OpenAI spokesperson Kate Waters tells WIRED in a statement that the company takes steps “to prevent people from using ChatGPT to spread false or misleading information, including such content linked to state-backed actors,” adding that these are long-standing issues that the company is attempting to address by improving its model and platforms.

    Matt Burgess, Natasha Bernal

  • Google’s Gemini will now generate presentations for you

    Google is rolling out a new feature for Gemini’s Canvas, the free interactive workspace inside the AI chatbot’s app, meant for students and employees who need to create presentations. Gemini is now capable of generating slides from just a prompt, though users can also upload files like documents, spreadsheets and research papers if they want a presentation based on a specific source. If the source doesn’t matter, users can simply ask Gemini to create a presentation on a given topic without uploading anything. But if the source is essential, they can upload the file first and then ask Gemini to create the presentation from it.

    The resulting decks come with a theme and images to accompany the text. Users can export them straight from the Gemini app into Google Slides, where they can continue to edit and refine the decks or work on them in collaboration with a teammate. The capability is now making its way to both personal and Workspace accounts.

    Google launched Canvas in March for people to use when they want to share their writing or code with Gemini for editing. If users put in code or prompts for projects like apps, web pages and infographics, Canvas can show them a visual representation of their design.

    Mariella Moon

  • Browser Password Managers Are Great, and a Terrible Idea

    By default, Google manages your encryption key, but it allows you to set up on-device encryption, which functions similarly to a zero-knowledge architecture. Your passwords are encrypted before being saved on your device, and you manage the key. Regardless of how the encryption works, Google uses AES, which is still the gold standard for security among password managers.
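In practice, “you manage the key” means the encryption key is derived on-device from a secret only you know, so the provider never sees it. A minimal sketch of that derivation step using Python’s standard library; PBKDF2 stands in here for whatever key-derivation function Google actually uses, and the derived key would then feed an AES cipher, which this sketch omits:

```python
import hashlib
import secrets

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 256-bit key on-device from a user-held passphrase.
    Only the salt needs to be stored with the vault; the passphrase
    itself never leaves the device."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)

salt = secrets.token_bytes(16)  # stored alongside the vault; not secret
key = derive_key("correct horse battery staple", salt)

# Same passphrase + salt reproduces the key; a wrong guess does not.
assert derive_key("correct horse battery staple", salt) == key
assert derive_key("wrong guess", salt) != key
print(len(key))  # 32 bytes = 256 bits, suitable for AES-256
```

The point of the high iteration count is to make brute-forcing the passphrase expensive even if an attacker steals the encrypted vault and the salt.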

    It was trivial to decrypt Chrome passwords previously, requiring little more than a Python script and knowledge of where the files are stored. But even there, Google has pushed the security bar up. App-bound encryption has invalidated those methods, and cracking passwords is far more involved than it used to be. Further, Google has integrated with Windows Hello. If you choose, you can have Windows Hello protect your passwords each time you log in by asking for your PIN or biometric authentication.

    Other browsers aren’t as secure. Firefox, for instance, makes it clear that, although passwords saved in Firefox are encrypted, “someone with access to your computer user profile can still see or use them.” Brave works in a similar way, though I suspect most people using Brave are using a third-party password manager (and probably a VPN) already.

    Regardless, storing your passwords in even a less secure browser like Firefox is leaps and bounds better than not using a password manager at all. And the browsers at the forefront of market share, Chrome and Safari, have vastly improved their security practices over the past few years. The problem isn’t encryption—it’s putting all your eggs in one basket.

    Let’s Talk OpSec

    OpSec, or operational security, is normally a term used when talking about sensitive data in government or private organizations, but you can look at your own security through an OpSec lens. If you were an attacker and wanted to swipe someone’s passwords, how would you go about it? I know where I’d look first.

    Even with better security measures, the goal of a browser-based password manager is to get people using password managers. That has to be balanced against how easy the password manager is to use. In a blog post announcing changes to Google’s authentication methods from Google I/O this year, the company mentions reducing “friction” seven times, while “encryption” isn’t mentioned at all. That’s not a bad thing, but it’s a testament to how these tools are designed.

    You don’t need to pick out words from a blog post to see this focus. Google gives you the option to turn on Windows Hello or biometric authentication with the Google Password Manager. Each time you want to fill in a password, you’ll need to authenticate. That’s undoubtedly more secure than not authenticating each time, but the setting is turned off by default. It creates friction.

    Jacob Roach

  • Google’s Super Smart New Nest Cameras Raise the Bar—and the Price

    The new Nest Cam Indoor and Nest Cam Outdoor boast the easiest setup experience I’ve encountered. Simply plug them in (the Nest Cam Indoor comes with a 10-foot USB-C cable, the Nest Cam Outdoor has an 18-foot weatherproof cable), scan the QR code sticker on the front of each camera with the Google Home app, connect to Wi-Fi, and you’re up and running in no time (both support 2.4-GHz and 5-GHz bands). The elegant magnetic mount for the Nest Cam Outdoor needs a couple of screws, while my Nest Cam Indoor is perched neatly on a shelf.

    While Google has lagged behind competitors for years with its 1080p cameras, support for HDR and a high frame rate helped keep the last-gen Nest cams relevant. That said, the jump to a 2560 x 1440 resolution with a wider 152-degree diagonal field of view is a clear and immediate upgrade. This resolution bump also enables 6X digital zoom, so the Nest Cams can serve up notifications that zoom in on the subject of each animated alert. These notifications show a few frames of each event, making it far easier to decide whether you need to tap through and watch the full video. You can also zoom in on the live feed and crop the view to stay focused on a specific area, like a garden gate or path.

    Screenshots: Google Home via Simon Hill

    Both cameras detect more activity and alert more accurately and swiftly than their predecessors. The range seems to be better, too. For example, my indoor camera faces a side door, and it can pick up people across the street and zoom in on them as they walk by. I don’t necessarily want it to do that, but the reach is impressive. The outdoor camera benefits even more: unlike the prior generation, the newer model picks me up entering the back door of the distant garage. The outdoor camera is also far faster to alert and upload accessible video than the old battery-powered model (as is generally true of wired cameras).

    The cameras get six hours of cloud video history at no extra cost (up from three for the previous generation), but that’s your allotment without an expensive subscription. On that note, Google has killed off Nest Aware in favor of the two-tier Google Home Premium: Standard is $10 per month or $100 per year, and Advanced is $20 per month or $200 per year.

    Google’s Home Premium subscriptions include everything you got with Nest Aware (30 days of video history, Familiar Faces, and garage door, package, smoke and CO alarm detection) and Nest Aware Plus (60 days of video history or 10 days of 24/7), but Standard also includes Gemini Live on compatible smart speakers and displays, and the option to create automations by typing what you want in the Home app. This last feature works well if you have a bunch of smart home devices set up in Google Home, and you can tell it to do things like “turn on the lights at sunset” or “have the side door camera trigger the outside lights.” It’s far easier than using the old script editor.

    Advanced AI

    The cream of the AI goodies requires the Advanced subscription. This adds descriptive notifications: instead of “person detected” or “animal detected,” you get messages like “person walks up stairs” or “cat is on the table.” The searchable video history using the Ask Home search bar is genuinely handy; you can ask questions like “Who opened the back door last night?” or “Did FedEx deliver a package today?” and jump straight to the event. You also get daily summaries with Home Brief, giving you an often weirdly comical digest of highlights from the day.

    Screenshot: Google Home via Julian Chokkattu

    Simon Hill

  • Anthropic Strikes Major Compute Deal With Google, Echoing OpenAI’s Chip Alliances

    Dario Amodei, a former OpenAI executive, founded Anthropic in 2021. Photo by Chance Yeh/Getty Images for HubSpot

    The latest sign of the A.I. industry’s unrelenting hunt for computing power comes from an expanded agreement between Anthropic and Google—a deal that, like several others struck in recent months, underscores the rise of circular arrangements across Silicon Valley. Under the new agreement, Google will provide Anthropic with well over one gigawatt of computing capacity by 2026, the companies announced yesterday (Oct. 23).

    Anthropic noted that the deal is worth “tens of billions of dollars” but didn’t provide an exact figure. The partnership further deepens the startup’s ties with Google, which has already invested about $3 billion in Anthropic and is expected to supply the company with up to 1 million of its custom A.I. chips, called tensor processing units (TPUs).

    Such partnerships are increasingly essential as leading A.I. startups scale at a breakneck pace. Anthropic, which now serves over 300,000 business customers, said the number of clients generating more than $100,000 in annual revenue has grown nearly sevenfold in the past year. “Anthropic and Google have a longstanding partnership, and this latest expansion will help us continue to grow the compute we need to define the frontier of A.I.,” said Krishna Rao, Anthropic’s chief financial officer, in a statement.

    Founded in 2021 by CEO Dario Amodei and several former OpenAI employees, Anthropic positioned itself as a safety-focused alternative to early A.I. players. Best known for its chatbot Claude, the company recently hit a $183 billion valuation and is reportedly on track to generate $9 billion in annual revenue.

    Despite its closer ties with Google, Anthropic emphasized that it remains committed to its “primary training partner,” Amazon, which has invested $8 billion in exchange for providing compute through its chips and A.I. cluster Project Rainier. The company also continues to rely on Nvidia’s GPUs as part of what it calls a “multi-platform approach.” Anthropic said it will keep investing in additional compute capacity as demand grows.

    Anthropic’s mutually beneficial partnerships with Google and Amazon reflect a broader industry trend: a growing web of interconnected A.I. partnerships between model developers and compute providers, each investing in and purchasing one another’s technology. OpenAI has been at the forefront of this shift, announcing a flurry of major deals in recent months, including an agreement with AMD to access six gigawatts of computing power, a deal with Nvidia to access 10 gigawatts of compute, and a $300 billion, five-year partnership with Oracle.

    The growing prevalence of such circular arrangements has raised some eyebrows in Silicon Valley, recalling the speculative interdependencies of the dot-com bubble and its eventual crash. But unlike that era, today’s A.I. spending is bolstered by stronger capitalization and clearer monetization potential, said Stephanie Aliaga, global market strategist for JPMorgan Chase, in a blog post earlier this month.

    Still, Aliaga cautioned that the concern isn’t misplaced. “The scale of spending is enormous, the pace unprecedented, and some assumptions around ROI, like the useful lives of assets, remain open questions,” she wrote. “History reminds us that enthusiasm can run ahead of reality.”

    Alexandra Tremayne-Pengelly

  • Bezos Earth Fund Awards $30M to A.I. Climate and Nature Projects

    Lauren Sánchez-Bezos, vice chair of the Bezos Earth Fund, calls A.I. a key tool for climate action. Photo: Kevin Mazur/Getty Images for Keri

    Researchers using A.I. to combat illegal fishing, automate plant identification, and track bird populations are getting a major boost from Jeff Bezos. The Amazon founder’s Bezos Earth Fund, his philanthropic commitment to fighting climate change, is donating $30 million to more than a dozen organizations that merge environmental science with cutting-edge technology.

    As concerns mount over A.I.’s soaring energy demands and its contribution to emissions, the Bezos Earth Fund wants to show how the technology can also help mitigate climate impacts. “These projects show how A.I., when developed responsibly and guided by science, can strengthen environmental action, support communities and ensure its overall impact on the planet is net positive,” said Amen Ra Mashariki, director of A.I. at the Bezos Earth Fund, in a statement.

    The grant is part of the AI for Climate and Nature Grand Challenge, an initiative launched in 2024 that will invest up to $100 million in A.I.-driven climate solutions. Earlier this year, the program awarded $50,000 grants to 24 different organizations. Fifteen of those will now receive up to $2 million each to scale their projects over the next two years, supported by mentorship and computing resources from partners including Amazon Web Services, Google and Microsoft Research.

    Applying technology to climate issues is one of the Bezos Earth Fund’s core missions, alongside efforts in nature conservation, environmental justice, decarbonization and food system transformation. Bezos launched the fund in 2020 with a pledge to invest $10 billion in environmental initiatives by the end of the decade. So far, it has distributed $2.3 billion to more than 300 projects.

    The Bezos Earth Fund is led by Tom Taylor, a former Amazon executive. Bezos serves as the fund’s executive chair, while his newlywed wife, Lauren Sánchez Bezos, has been its vice chair since 2023.

    “A.I. can be a powerful ally to help make the world a better place,” said Sánchez Bezos in a statement. “These innovators, using A.I., are showing us new possibilities by reimagining how we grow food, protect wildlife and power our planet to make a true impact.”

    Among the newly funded projects: Delft University of Technology is using neural networks to accelerate cultivated meat production; the Periodic Table of Food Initiative is developing an A.I. tool to generate healthy recipe suggestions; and the University of Leeds plans to use A.I. to convert food waste into microbial protein. Other grantees include the New York Botanical Garden, Yale University and the Wildlife Conservation Society.

    The challenge’s overarching goal is to fuel technological innovations that push climate solutions into new territory. At Cornell University’s Lab of Ornithology, for example, researchers will use the fund to develop bioacoustic technology that monitors threatened species in biodiversity hotspots like Guatemala’s Maya Biosphere Reserve and Brazil’s Pantanal wetland.

    “We need to figure out what’s causing the declines and how we can reverse them,” said Ian Owens, director of the Cornell lab, in a statement. “We can’t do that using traditional methods, and support from the Bezos Earth Fund will help us unlock exactly the kind of efficient, scalable approach we need.”

    Alexandra Tremayne-Pengelly

  • Billionaire ex-Google CEO says one deceptively simple weekend habit will help you level up at work | Fortune

    Eric Schmidt, whose net worth is hovering around $45 billion, knows what it takes to climb the corporate ladder in Silicon Valley, having spent a decade as CEO of Google. Yet the secret to his success is not racking up endless hours in the office.

    Instead, Schmidt credits a deceptively simple habit, one he calls a game-changer for anyone seeking meaningful productivity gains: Set aside a few undisturbed hours each weekend for reflection, and grab a pen and paper. No screens allowed.

    This approach, which Schmidt revealed during a recent interview on The Gstaad Guy Podcast hosted by Gustaf Lundberg Toresson, traces back to his mentorship by the late great Bill Campbell, legendary coach to tech’s most influential leaders.

    “You work really hard during the week, as hard as you can—you know, 12 hours, 14 hour days, whatever—and on the weekends, when you’re at home or with your family or whatever, carve out a few hours to think,” Schmidt said on the podcast. “Turn off the phone. You’re not texting. You’re not looking at Instagram and so forth. And think and write down your assessment of what you did last week, and then what you need to do next week to address the things you forgot to do last week.”​

    He insists this simple practice can be transformative because it helps you practice focusing on accountability. “It’s a good trick because it forces you to take charge of your next week. Like, ‘Oh, I forgot that I have a sales problem over there,’ or ‘I forgot I was supposed to call this person,’ ‘Oh, I didn’t have this proposal and I had this idea but I didn’t get to it.’ And that usually works pretty well,” he said.​

    This practice isn’t about squeezing more tasks into the weekend. It’s about using downtime to recalibrate. Schmidt said he eventually found his optimal workweek to be about 63 hours—not the 80-plus-hour marathons of his younger years—which just goes to show that more time at the desk doesn’t always lead to better outcomes. “You hit declining marginal productivity,” he said on the podcast, adding that too much “slaving away” can actually erode results.​

    He also makes clear that reflection is not just for CEOs or entrepreneurs. Anyone, from engineers to junior staff, can benefit, especially in a world saturated with digital noise and the ever-present risk of distraction. In an era where “attention has become a form of currency,” he said, the need to carve out thoughtful time while unplugged from our cavalcade of electronic distractions has never been greater.​​

    According to Schmidt, adopting this weekend habit can help you catch small problems before they grow into big ones, and let you stay focused on important matters. As Schmidt notes, “writing things down equals clarity”—and that clarity is what keeps the world’s most powerful leaders not just busy, but effective.

    For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.

    Dave Smith

  • Samsung’s Galaxy XR doesn’t give me much hope for Android XR

    So Samsung made a “Vision Pro Lite.” That was my immediate takeaway after this week’s debut of the Galaxy XR, the first Android XR device to hit the market. While Samsung deserves credit for offering something close to the Vision Pro for nearly half the price, an $1,800 headset still won’t get mainstream consumers rushing out the door to experience the wonders of mixed reality. And with the limited amount of content in Android XR at the moment, the Galaxy XR is in the same position as the Vision Pro: It’s just a well-polished developer kit.

    The only logical reason to buy a Galaxy XR would be to test out apps for Android XR. If you just want to experience VR and dabble in a bit of augmented reality, you’re better off spending that money on a gaming laptop and the excellent $500 Meta Quest 3. (The Meta Quest Pro, the company’s first high-end mixed reality device, was unceremoniously killed after launching at an eye-watering $1,500.)

    But even for developers, the Galaxy XR feels like it’s lacking, well, vision. Samsung has done an admirable job of copying almost every aspect of the Vision Pro: The sleek ski goggle design, dual micro-OLED displays and hand gesture interaction powered by a slew of cameras and sensors. But while Apple positioned the Vision Pro as its first stab at spatial computing, an exciting new platform where we can use interactive apps in virtual space, Samsung and Google are basically just gunning to put Android on your face.

    There aren’t many custom-built XR apps, aside from Google’s offerings like Maps and Photos. (Something that also reminds me of the dearth of real tablet apps on Android.) And the ability to view 360-degree videos on YouTube has been a staple of every VR headset for the last decade — it’s not exactly notable on something that costs $1,800. Samsung and Google also haven’t said much about how they plan to elevate XR content. At least Apple is attempting to push the industry forward with its 8K Immersive Videos, which look sharper and more realistic than low-res 360-degree content.

    For the most part, it seems as if Google is treating Android XR as another way to force its Gemini AI on users. In its press release for the Galaxy XR, Samsung notes that it’s “introducing a new category of AI-native devices designed to deliver immersive experiences in a form factor optimized for multimodal AI.”

    …What?

    In addition to being a crime against the English language, what the company is actually pitching is fairly simple: It’s just launching a headset that can access AI features via camera and voice inputs.

    Who knows, maybe Gemini will make Android XR devices more capable down the line. But at the moment, all I’m seeing in the Galaxy XR is another Samsung device that’s shamelessly aping Apple, from the virtual avatars to specific pinch gestures. And Google’s history in VR and interactive content doesn’t inspire much hope about Android XR. Don’t forget how it completely abandoned Google Cardboard, the short-lived Daydream project and its hyped up Stadia cloud service. Stadia’s death was particularly galling, since Google initially pitched it as a way to revolutionize the very world of gaming, only to let it fall on its face.

    There’s no doubt that Samsung, Apple and Meta have a ton of work left ahead in the world of XR. Samsung is at least closer to delivering something under $1,000, and Meta also recently launched the $800 Ray-Ban Display. But price is only one part of the problem. Purpose is another issue entirely. After living with the Vision Pro since its debut, I can tell that Apple is at least thinking a bit more deeply about what it’s like to wear a computer on your face. Just look at the upgrades it’s made around ultra-wide Mac mirroring, or the way Spatial Personas make it feel as if you’re working alongside other people. With Android XR, Google seems to just be making a more open Vision Pro.

    Honestly, it’s unclear if normal users will ever want to use any sort of XR headset regularly, no matter how cheap they get. The experience of making these headsets could help Google, Apple and Meta develop future AR glasses, or eyewear that offers some sort of XR experience (Samsung already has something in the works with Warby Parker and Gentle Monster). But while Apple and Meta have broken new ground in XR, Google and Samsung just seem to be following in their footsteps.

  • Google says it made a breakthrough toward practical quantum computing

    Enabled by the introduction of its Willow quantum chip last year, Google today claims it’s conducted breakthrough research that confirms it can create real-world applications for quantum computers. The company’s Quantum Echoes algorithm, detailed in a paper published in Nature, is a demonstration of what Google calls the first-ever verifiable quantum advantage, achieved by running an out-of-time-order correlator (OTOC) algorithm.

    A core belief in quantum computing is that developing computer systems with qubits — which can represent multiple states at once, as opposed to binary ones and zeroes — could lead to greater understanding of the quantum systems surrounding us. Google believes its new algorithm is further proof of that assumption. The Quantum Echoes algorithm is able to illustrate how different parts of a quantum system interact with each other, in a way that’s repeatable by other quantum computers and that “runs 13,000 times faster on Willow than the best classical algorithm on one of the world’s fastest supercomputers.”

    The “echo” in Quantum Echoes comes from how Google’s algorithm interacts with a quantum system, in this case the Willow chip. “We send a carefully crafted signal into our quantum system (qubits on Willow chip), perturb one qubit, then precisely reverse the signal’s evolution to listen for the ‘echo’ that comes back,” the company explained in its announcement blog. That echo is magnified by the “constructive interference” of quantum waves, making the measurement Google is able to take extremely sensitive.
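The “perturb, then reverse, then listen for the echo” protocol Google describes is what the physics literature calls an out-of-time-order correlator. For reference, its textbook form (standard notation from the literature, not from Google’s paper) is:

```latex
% Standard OTOC definition: V is the local perturbation applied to one
% qubit; W(t) is another operator evolved forward and then backward
% under the system Hamiltonian H.
C(t) = \langle W^{\dagger}(t)\, V^{\dagger}\, W(t)\, V \rangle,
\qquad W(t) = e^{iHt}\, W\, e^{-iHt}
```

The decay of C(t) measures how quickly the local perturbation V spreads through the system, and the forward-then-reversed evolution inside W(t) is the “precisely reverse the signal’s evolution” step that produces the echo.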

    That sensitivity suggests quantum computers could be an important tool in modeling things like the interaction of particles or the structure of molecules. In a separate experiment with the University of California, Berkeley, Google tried to prove that by running the Quantum Echoes algorithm to study two different molecules, and comparing it to the Nuclear Magnetic Resonance (NMR) method currently used by scientists to understand chemical structure. The results from both systems matched, and Google says Quantum Echoes even “revealed information not usually available from NMR.”

    In the long term, a full-scale quantum computer could be used for everything from drug discovery to the development of new battery components. For now, though, Google believes its Quantum Echoes research means real-world quantum computer applications could arrive within the next five years.

    [ad_2]

    Source link

  • Google Gemini will arrive in GM cars starting next year

    [ad_1]

    Google Gemini is coming to GM vehicles in 2026. The company will be integrating a conversational AI assistant powered by Google’s platform into many of its cars, trucks and SUVs.

    GM says this assistant will be able to access vehicle data to suss out maintenance concerns, alerting the driver when necessary. The company also promises it’ll be able to help plan routes and explain various features of the car. It should also be able to do things like turn on the heat or air conditioning before the driver even enters the vehicle.

    This will replace the “Google built-in” operating system that already exists in many GM vehicles. This OS already offers access to stuff like Google Maps, Google Assistant and related apps. The upcoming Gemini-based chat assistant will do the same type of things, but it should perform better.

    “One of the challenges with current voice assistants is that, if you’ve used them, you’ve probably also been frustrated by them because they’re trained on certain code words or they don’t understand accents very well or if you don’t say it quite right, you don’t get the right response,” GM VP Dave Richardson told TechCrunch. “What’s great about large language models is they don’t seem to be affected by that.”

    One brand-new feature that Gemini will bring to the table is web integration. This will let drivers ask the chatbot questions pertaining to geographic location and the like. GM gives an example of someone asking about the history of a bridge they are passing over.

    The Gemini assistant will be available via the Play Store after launch as an over-the-air upgrade to OnStar-equipped vehicles. It won’t be limited to newer releases, as GM says it’ll work with vehicles from the model year 2015 and above. The company also says it’s working on its own AI chatbot that has been “custom-built for your vehicle.” There’s no timetable on that one.

    GM landed in hot water recently when it was found to have been selling customer information sourced from its OnStar Smart Driver program to insurance companies without user consent. This led to the FTC banning the company from selling any driver data for five years. Richardson says the Gemini integration will be privacy-focused and the software will let drivers control what information it can access and use.

    The company made these announcements at the GM Forward media event, where it also discussed other forthcoming initiatives. It has scheduled a rollout of its self-driving platform for 2028, and it’s developing its own computing platform set to launch the same year. This does mean that GM will be sunsetting integration with Apple CarPlay and Android Auto, which will be phased out over the next few years.

    [ad_2]

    Lawrence Bonk

    Source link

  • A Beloved Vibe Coding Platform Is Finally Getting Upgraded for More Casual Users 

    [ad_1]

    It’s been a good week so far for entrepreneurs who are interested in trying their hands at vibe coding. On Monday, Anthropic released a new feature that enables vibe coding on the web and mobile devices, and on Tuesday, Google released a new vibe coding-focused update to Google AI Studio. Vibe coding, for those new to it, is a novel form of non-technical software development. 

    Anthropic has already found major success with its own coding tool, Claude Code. The company announced on Monday that Claude Code has generated over $500 million in revenue since its release in February, and Anthropic is now bringing it to additional platforms in order to make vibe coding more accessible. 

    Previously, using Claude Code took some technical expertise: it was only available as a command line interface within your computer terminal, or as a plugin within an integrated development environment, also known as an IDE. Terminals and IDEs are how professional software developers write and edit code, says Claude Code product manager Cat Wu, so it made sense to start there. But over time, Wu realized that non-technical people were also using Claude Code, so the team started experimenting with new form factors. 

    “Everywhere that a developer is doing work,” she says, “whether that’s on web and mobile or other tools, we want Claude to be easily accessible there.” 

    Wu admits that Claude Code on web and mobile is still a fairly technical experience. For instance, users must connect to Github in order to create new files, and aren’t able to see a live preview of their work in the app like in Claude.ai, Anthropic’s consumer-facing chat platform. Wu says that her team will bring more visual elements into Claude Code for the web in the coming months to make the experience more intuitive for non-technical vibe coders. 

    Meanwhile, Google has also put significant resources into making vibe coding more accessible. On Tuesday, the company released a big update to Google AI Studio, its AI-assisted coding platform, specifically aimed at vibe coders. In a video, Google AI Studio product lead Logan Kilpatrick explained that in this new “vibe coding experience,” users can write out the idea for their app, and then select the specific AI-powered elements they want to include, like generating images, integrating an AI chatbot, and prioritizing low-latency responses. 

    When vibe coding through the platform, Kilpatrick said, Google AI Studio will generate suggestions for next steps in the form of clickable buttons. The platform also makes it easy for users to deploy their apps to the internet, either through Google Cloud or Github. According to Kilpatrick, Google AI Studio is free to use, but will charge for access to its most advanced AI models. 

    Anthropic and Google aren’t the only tech companies offering vibe coding tools. If you’re looking to get into the vibe coding game, check out recent tools from companies like OpenAI, Replit, and Lovable.

    [ad_2]

    Ben Sherry

    Source link

  • Many big names in group of unlikely allies seeking ban, for now, on AI

    [ad_1]

    Prince Harry and his wife Meghan have joined prominent computer scientists, economists, artists, evangelical Christian leaders and American conservative commentators Steve Bannon and Glenn Beck to call for a ban on AI “superintelligence” they say could threaten humanity.

    The letter, released Wednesday by a politically and geographically diverse group of public figures, is squarely aimed at tech giants like Google, OpenAI and Meta Platforms that are racing each other to build a form of artificial intelligence designed to surpass humans at many tasks.

    The 30-word statement says, “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

    In a preamble, the letter notes that AI tools may bring health and prosperity, but alongside those tools, “many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

    Prince Harry added in a personal note that “the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.”

    Signing alongside the Duke of Sussex was his wife Meghan, the Duchess of Sussex.

    Prince Harry and Meghan in August 2024

    CBS News


    “This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. “It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”

    Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.

    But the list also has some surprises, including Bannon and Beck, in an attempt by the letter’s organizers at the nonprofit Future of Life Institute to appeal to President Trump’s Make America Great Again movement even as Mr. Trump’s White House staff has sought to reduce limits to AI development in the U.S.

    Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; the former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama.

    Former Irish President Mary Robinson and several British and European parliamentarians signed, as did actors Stephen Fry and Joseph Gordon-Levitt, and musician will.i.am, who has otherwise embraced AI in music creation.

    Caution urged  

    “Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.,” wrote Gordon-Levitt, whose wife Tasha McCauley served on OpenAI’s board of directors before the upheaval that led to CEO Sam Altman’s temporary ouster in 2023. “But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that.”

    The letter is likely to stoke ongoing debates within the AI research community about the likelihood of superhuman AI, the technical paths to reach it and how dangerous it could be.

    “In the past, it’s mostly been the nerds versus the nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. “I feel what we’re really seeing here is how the criticism has gone very mainstream.”

    Labeling is complicating the discourse  

    Confounding the broader debates is that the same companies that are striving toward what some call superintelligence and others call artificial general intelligence, or AGI, are also sometimes inflating the capabilities of their products, which can make them more marketable and have contributed to concerns about an AI bubble. OpenAI was recently met with ridicule from mathematicians and AI scientists when one of its researchers claimed ChatGPT had figured out unsolved math problems – when what it really did was find and summarize what was already online.

    “There’s a ton of stuff that’s overhyped and you need to be careful as an investor, but that doesn’t change the fact that – zooming out – AI has gone much faster in the last four years than most people predicted,” Tegmark said.

    Tegmark’s group was also behind a March 2023 letter – still in the dawn of a commercial AI boom – that called on tech giants to temporarily pause the development of more powerful AI models. None of the major AI companies heeded that call. And the 2023 letter’s most prominent signatory, Elon Musk, was at the same time quietly founding his own AI startup to compete with the very companies he had asked to pause for six months.

    Asked if he reached out to Musk again this time, Tegmark said he wrote to the CEOs of all major AI developers in the U.S. but didn’t expect them to sign.

    “I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.”

    [ad_2]

    Source link

  • Reid Hoffman and David Sacks Are Feuding on X Over AI and ‘Dirty Tricks’

    [ad_1]

    The discourse over AI regulation is heating up and spilling out on social media. 

    White House crypto and AI czar David Sacks and billionaire LinkedIn co-founder Reid Hoffman exchanged barbs after Hoffman expressed his support for Anthropic’s approach to AI innovation and safety in a thread posted to X on Monday.

    “The leading funder of lawfare and dirty tricks against President Trump wants you to know that ‘Anthropic is one of the good guys.’ Thanks for clarifying that. All we needed to know,” Sacks posted on social media platform X.

    Hoffman, who is also a major Democratic donor and AI optimist, responded minutes later. He accused Sacks of not actually reading the thread in which he advocates for “a light-touch regulatory landscape that prioritizes innovation and enables new players to compete on level playing fields.” He also referenced Microsoft, Google and OpenAI as “trying to deploy AI the right way.” 

    “When you are ready to have a professional conversation about AI’s impact on America, I’m here to chat,” Hoffman wrote. “Also: crying ‘lawfare and dirty tricks’ is particularly rich, given the Trump Administration’s recent actions.”

    In a wide-ranging conversation prior to the social media spat (and on the heels of an event called Entrepreneurs First Demo Day in San Francisco), Hoffman spoke to Inc. about his approach to AI regulation, describing it as “iterative deployment and development,” rather than preemptive, fear-based rulemaking. He compared it to how motor vehicles preceded the introduction and mandate of seatbelts.

    “Let’s limit the regulatory stuff to transparency, monitoring, accountability, to get a good sense of what’s actually going on, and then only impose when we know that there’s something potentially catastrophic,” he says.

    Some critics worry, however, that lawmakers are not informed enough to craft meaningful regulations for technology that is changing as rapidly as AI, whereas others blame regulatory inaction on lobbying and campaign contributions. Hoffman says that he believes frontier AI labs can help govern themselves.

    “When I was on the board of OpenAI, part of what we were doing was trying to make sure all the top labs were talking to each other about how to do safety the right way, but it grew more and more tense with regulators,” he says. 

    “It’d be useful to have some kinds of cross-collaboration on what is good alignment, what is good safety,” he adds.

    Hoffman’s uneasy relationship with the Trump administration precedes the October X feud. In late September, Trump mentioned Hoffman as a possible target of a probe along with George Soros, after a Reuters reporter asked him who he might investigate in connection with domestic terrorism, Reuters reported. Trump was signing a memorandum meant to crack down on domestic terrorism and political violence several days after he signed an executive order designating anti-fascism or “Antifa” a domestic terrorism organization. Both Soros and Hoffman are substantial donors to the Democratic Party, and Hoffman also helped to fund E. Jean Carroll’s lawsuit against the president through a nonprofit, CNBC reported.

    Hoffman tells Inc. that these developments have not changed his politics, although he has been “careful about trying to fund stuff very directly.”

    Hoffman describes himself as “very pro-American society, very pro-American prosperity and business.”

    “As far as I’m aware,” he says, “Antifa is a fictional organization and I certainly would never have deliberately funded anything that would support domestic terrorism.”

    Hoffman also says he has not backed pro-AI super PACs, two of which emerged in one week in September to support AI-friendly politicians regardless of political affiliation, The New York Times reported. Tech titans have also been spotted hobnobbing with the president, including at a September dinner at the White House. Executives including Meta’s Mark Zuckerberg, Apple’s Tim Cook and Microsoft’s Bill Gates reportedly discussed various AI-related investments and educational initiatives, while also praising the president. Hoffman says the fawning “could be a little silly,” but says he believes business leaders do have a role to play in U.S. politics.

    “Especially in democracies, it’s very important for all business leaders to be in collaboration [and] discussion with the elected leaders,” Hoffman says. “Technology sets the drumbeat about what happens with society, what happens with industries and so forth, and so I think that dialog is extremely important.”

    Hoffman has himself co-founded two AI-powered startups in recent years. He co-founded Inflection AI together with Mustafa Suleyman and Karén Simonyan in 2022, to create a more empathetic large language model. The company pivoted in 2024 after Microsoft paid a fee to license its technology and hired away much of its top talent. And earlier this year, he launched a new venture, Manas AI, to leverage AI to cut down on the time and costs inherent to therapeutic drug discovery.

    [ad_2]

    Chloe Aiello

    Source link

  • Samsung is working on XR smart glasses with Warby Parker and Gentle Monster

    [ad_1]

    As part of its Galaxy XR headset presentation, Samsung also briefly teased another wearable product. It’s working in collaboration with two eyewear companies, Warby Parker and Gentle Monster, on AI-powered smart glasses to go up against Meta’s Ray-Ban models, Samsung’s head of customer experience Jay Kim announced at the end of the livestream.

    “We’re also really excited about the AI glasses that we’re currently building together with Google,” Kim said. “We’re working with two of the most forward-thinking brands in eyewear, Warby Parker and Gentle Monster, to introduce new devices that fit into your lifestyle.”

    Samsung will focus on two different markets with those brands, though both will include “cutting-edge” AI features co-developed with Google. With Gentle Monster, it’s developing “fashion-forward” glasses that will likely be aimed at the higher end of the market. The Warby Parker collaboration, meanwhile, will yield eyewear designed for general consumers, probably at a lower price point.

    Samsung only said that the AI glasses will bring “style, comfort and practicality” to everyday life via Android’s XR ecosystem. As we saw in May with Google’s prototype XR smart glasses, they will likely employ a Gemini-powered display that shows notifications and small snippets of info from your apps, like the music you’re listening to or turn-by-turn GPS directions. They should also have a built-in camera, of course, along with speakers and a microphone.

    Design and appearance will also be key, but Samsung has yet to show any images of the upcoming smart glasses and didn’t reveal a release date. However, it will have a tough climb against Meta’s lineup given the Ray-Ban branding and that company’s head start on the technology. Last week, Meta introduced its Ray-Ban Display model that includes a screen for a true extended reality experience.

    [ad_2]

    Steve Dent

    Source link

  • Why the Samsung Galaxy XR can support ‘almost all’ Android apps

    [ad_1]

    The Samsung Galaxy XR is designed to be a showcase for Android XR, Google’s new AR / VR operating system, but unlike competing mixed reality headsets, Google says there will be few limits on the apps the Galaxy XR will actually be able to run. In fact, a Google spokesperson tells Engadget that “almost all Android apps will automatically be made available without any additional development effort.”

    Obviously, Google and Samsung would love deliberately designed spatial experiences for their new hardware, but almost all existing Android apps, regardless of whether they were made for phones, will be considered “Android XR compatible mobile apps” once the headset launches. That means they’ll run in a floating spatial panel that can be moved around the virtual space surrounding you, and per Google’s Android XR developer guidelines, will automatically support core XR input methods like eye and hand tracking, along with the usual suspects like controllers, mice and keyboards. They should also run and look like they would on a smartphone or tablet. “Apps that specify compact sizes show up accordingly and apps that allow for resizing can be resized in XR. These apps do not run in compatibility mode and won’t be letterboxed,” Google says.

    The only apps that won’t make the cut are ones that require features a given Android XR device doesn’t support, like GPS. And in the case of apps that are already updated to work on large screens, or that are “adaptive apps” designed to reflow and change size depending on the Android device they’re running on, things will be even smoother. Google expects adaptive design to be the default going forward, an effort that started with this year’s release of Android 16. “Many APIs restricting size will be ignored on larger screens (which includes Android XR),” Google’s spokesperson said, because the company ultimately wants Android apps to feel responsive whether they’re on a phone, an in-car display or an XR headset.
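    For a sense of the kind of declaration involved, resizable behavior on Android is opted into through the app manifest. This is a generic sketch using the standard multi-window attribute, not XR-specific configuration confirmed by Google, and the activity name is hypothetical:

    ```xml
    <!-- AndroidManifest.xml (excerpt): a hypothetical app whose main activity
         permits free resizing, so a spatial panel can scale it beyond compact
         phone dimensions instead of locking it to a fixed size. -->
    <application android:resizeableActivity="true">
        <activity
            android:name=".MainActivity"
            android:resizeableActivity="true" />
    </application>
    ```

    Apps that instead restrict their size via fixed-orientation or max-aspect-ratio declarations are the ones Google says will simply show up at their compact size in a floating panel.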

    Apple tried a similar but more limited approach with the launch of visionOS and the Vision Pro by letting developers list their iOS and iPadOS apps in the visionOS App Store. The move produced mixed results, and a dearth of real visionOS apps. An app designed with a device in mind is better than one that’s not, but Google does at least appear to have set Android developers up for a slightly smoother ride. Considering the Galaxy XR’s cheaper price compared to the Vision Pro, they might have a bigger audience to make apps for, too.

    [ad_2]

    Source link

  • Samsung’s Galaxy XR Mixed Reality Headset Undercuts Apple’s Vision Pro by $1,700

    [ad_1]

    It has been five years since Samsung and Google stopped supporting their respective mobile virtual reality headsets. For a second try, the companies have partnered up with a bolder vision in the mixed reality space, starting with the new Galaxy XR. Announced last year as Project Moohan, it’s the first headset powered by Android XR, a new platform for smart glasses and headsets built on Android and Google’s Gemini assistant from the ground up.

    The Galaxy XR is available today in the US and South Korea for $1,800. (You can finance it for $149 per month for 12 months.) That’s a leap over standard VR headsets like the Meta Quest 3, but a significantly lower price than the $3,499 Vision Pro, which Apple is refreshing this week with the new M5 processor.

    Galactic Vision

    Photograph: Julian Chokkattu

    I was able to demo the headset again last week at a closed-doors media event in New York City held by Samsung, Google, and Qualcomm—the Galaxy XR is powered by Qualcomm’s Snapdragon XR2+ Gen 2 chip—but not much was different from my original hands-on experience last year, which you can read more about here. The official name and price were the two big question marks, but those have now been addressed.

    The Galaxy XR purports to do nearly everything that Apple’s device does. Pop the headset on and you’ll be able to see the room you’re in through the pancake lenses and layer virtual content over it, or whisk yourself off to another world. Your hands are the input (controllers are available as a separate purchase), and it uses eye tracking to see what you want to select. You can access all your favorite apps from the Google Play Store; XR apps will have a “Made for XR” label.

    Samsung’s headset is more plasticky and doesn’t feel as premium as Apple’s Vision Pro—I noticed the tethered battery pack on a demo unit looked well-worn with fingerprint smudges on the coating. But this general construction makes it feel significantly lighter to wear. I wasn’t able to try it for a long period, but it felt comfortable, with the only issue being a sweaty brow after a 25-minute bout with it on. The headset was warm at the top, but the battery pack remained relatively cool. Speaking of the battery, it lasts 2 hours, or 2.5 hours if you’re purely watching video. That’s on par with the original Vision Pro, though the M5 version extends it to 2.5 with mixed use.

    [ad_2]

    Julian Chokkattu

    Source link

  • Samsung Galaxy XR hands-on: A smarter, more open take on Apple’s Vision Pro for half the price

    [ad_1]

    Apple’s Vision Pro was meant to usher in a new era for headsets. However, its high price and somewhat limited utility resulted in what may be the company’s biggest flop in years. Now it’s time for Samsung to give things a go with the Galaxy XR. It’s a fresh take on modern mixed reality goggles developed through deep partnerships with Qualcomm and Google and it attempts to address some of the Vision Pro’s biggest shortcomings.

    The hardware

    While both Apple and Samsung’s headsets have a lot of similarities (like their basic design and support for features such as hand and eye tracking), there are also some very important differences. First, at $1,800, the Galaxy XR is essentially half the price of the Vision Pro (including the new M5-powered model). Second, instead of Apple’s homegrown OS, Samsung’s headset is the first to run Google’s new Android XR platform, which combines a lot of familiar elements from its mobile counterpart but with a bigger emphasis on AI and Gemini-based voice controls. And third, because Samsung relied more on partners like Google and Qualcomm, the Galaxy XR feels like it’s built around a larger, more open ecosystem that plays nicely with a wider range of third-party devices and software.

    The Galaxy XR fundamentally doesn’t look that much different from the Vision Pro. It features a large visor in front with an assortment of 13 different exterior sensors to support inside-out tracking, passthrough vision and hand recognition. There are some additional sensors inside for eye and face tracking. There’s also a connector for the wire that leads to its external clip-on battery pack alongside built-in speakers with spatial audio. The one big departure is that unlike the Vision Pro, the Galaxy XR doesn’t have an outward-facing display, so it won’t be able to project your face onto the outside of the headset, which is just fine by me.

    Sam Rutherford for Engadget

    However, the devil is in the details because while the original Vision Pro weighed between 600 and 650 grams (around 1.3 to 1.4 pounds) depending on the configuration (not including its battery pack), the Galaxy XR is significantly lighter at 545 grams (1.2 pounds). And that’s before you consider the new M5 Vision Pro, which has somehow gone backwards by being even heavier at 750-800 grams (around 1.6 pounds). Furthermore, it seems Samsung learned a lot from its rivals by including a much larger and thicker head cushion that helps distribute the weight of the headset more evenly. Granted, during a longer session, I still noticed a bit of pressure and felt relief after taking off the Galaxy XR, but it’s nothing like the Vision Pro, which in my experience gets uncomfortable almost immediately. Finally, around back, there’s a simple strap with a knob that you can twist to tighten or loosen the headband as necessary. So even without extra support running across the top of your head, getting in and out of the Galaxy XR is much easier and comfier than the Vision Pro.

    The side of the Galaxy XR features a connector for its battery pack and built-in spatial audio speakers.
    Sam Rutherford for Engadget

    On the inside, the Galaxy XR is powered by Qualcomm’s Snapdragon XR2+ Gen 2 chip with dual micro OLED displays that deliver 4K resolution (3,552 x 3,840) to each eye at up to 90Hz. I wish Samsung was able to go up to a 120Hz refresh rate like on the Vision Pro, but considering the Galaxy XR’s slightly higher overall resolution, I’m not that bothered. And I must say, the image quality from this headset is seriously sharp. It’s even better than Apple’s goggles and it might be the best I’ve ever used, particularly outside of $10,000+ enterprise-only setups. Once again, when you consider that this thing costs half the price of a Vision Pro, this headset feels like a real accomplishment by Samsung to the point where I wouldn’t be surprised if the company is losing money on every unit it sells.

    In terms of longevity, Samsung says that for general use the Galaxy XR should last around two hours. If you’re only watching videos though, that figure is more like two and a half. Thankfully, if you do need to be in mixed reality for longer, you can charge the headset while it’s being used. As for security, the Galaxy XR uses iris recognition to skip traditional passwords, which is nice.

    The platform: Android XR

    Sometimes, trying out a new software platform can be a little jarring. But that’s not really the case for Android XR, which shouldn’t present much of a learning curve for anyone who has used other headsets or Google’s ubiquitous mobile OS. After putting the goggles on, you can summon a home menu with an app launcher by facing your palm up and touching your index finger and thumb together. From there, you can open apps and menus by moving your hands and pinching icons or rearranging virtual windows by grabbing the anchor point along the bottom and putting them where you want.

    Even without a top strap, the Galaxy XR is surprisingly comfortable thanks to a larger forehead cushion and less weight than the Apple Vision Pro.
    Sam Rutherford for Engadget

    Notably, while there is a growing number of new apps made specifically for XR, you still get access to all of your standard Android titles. Those include Google Photos, Google Maps and Youtube, all of which I got a chance to play around with during a 25-minute demo. In Photos, you can browse your pictures normally. However, to take advantage of the Galaxy XR’s hardware, Google created a feature that allows the app to convert standard flat images (with help from the cloud) into immersive ones. While the effect isn’t true 3D, it adds distinct foreground, midground and background layers to images in a way that makes viewing your camera roll just a bit more interesting.

    In Maps, you start out with a view of the world before using hand gestures to move and zoom in wherever you want or voice commands to laser in on a specific location. The neat new trick for this app is that if you find bubbles over things like restaurants and stores, you can click those to be transported inside those businesses, where Android XR will stitch together 2D photos to create a simulated 3D environment that you can move and walk around in. Granted, this doesn’t have a ton of practical use for most folks unless you want to take a virtual tour of something like a wedding venue. But, the tech is impressive nonetheless.

    This connector is where you connect the Galaxy XR's battery pack. Runtime is expected to be about two hours, or two and a half if you are only watching videos.
    Sam Rutherford for Engadget

    Finally, in the YouTube app, the Galaxy XR did a great job of making standard 360 videos look even better. While quality will always depend on the gear that captured the content, viewing spatial clips was a great way to show off its resolution and image quality. Google says it will also put a new tab on the app to make finding 360 videos easier, though you can always watch the billions of standard flat videos as well.

    Interestingly, you can use and navigate the Galaxy XR entirely with hand gestures, but voice commands (via Gemini) are also a major part of the Android XR platform. Because the goggles sit on your head, unlike with mobile devices, there’s no need to use a wake word every time you want to do something. You just talk and Gemini listens (though you can choose to disable this behavior if you prefer), so this makes voice interactions feel a lot more natural. Because Gemini can also do things like adjust settings or organize all the apps you have open, in addition to answering questions, it feels like Google is starting to deliver on some of those Star Trek moments where you can simply ask the computer to do something and it just happens. Yes, it’s still very early, but as a platform, Android XR feels much more like a virtual playground than visionOS does at the moment.

    Other features

    The back of the Samsung Galaxy XR headset has a handy knob for quickly loosening or tightening its headband.
    Sam Rutherford for Engadget

    While I didn't get to test these features myself, there are some other important capabilities worth mentioning. In addition to apps, you can play your standard selection of Android games like Stardew Valley or connect the headset to your PC (via something like Steam Link) to play full desktop titles. I was also told that the Galaxy XR can be tethered to a computer and used like a traditional VR headset. And while Samsung is making optional wireless controllers for the Galaxy XR (and a big carrying case), you may not need them at all, since you can also pair the goggles with typical Bluetooth gamepads as well as wireless mice and keyboards.

    Google also says it's working on a new system called Likenesses, which uses data from the headset's interior sensors to create personalized avatars with more realistic expressions for video calls and meetings. Additionally, you'll be able to use tools like Veo 3 to make AI-generated videos from voice prompts. But this is just scratching the surface of the Galaxy XR's capabilities, and I want to use this thing more before offering a final verdict.

    Early thoughts

    Engadget Senior Reporter Sam Rutherford wearing the Samsung Galaxy XR headset.
    Sam Rutherford for Engadget

    In many ways, the Galaxy XR looks and feels like a flagship mixed reality headset in the same vein as the Vision Pro, but for the Android crowd (and, to some extent, Windows users as well). On top of that, Google has done some interesting things with Android XR to make it feel like there's a much wider range of content and software to view and use. The addition of a dedicated AI assistant in Gemini and voice controls feels much more impactful on goggles than on a phone, because you can't always count on having physical inputs like a mouse or keyboard. And with the Galaxy XR costing half the price of the Vision Pro, Samsung and Google have done a lot to address some of the most glaring issues with Apple's headset.

    In case the price drop wasn't enough, it feels like all the companies involved are doing as much as possible to sweeten the deal. I actually started laughing when I first heard all the discounts and free subscriptions that come with the headset. In addition to the goggles themselves, every Galaxy XR comes with what's being called the Explorer Pack: 12 months of Google AI Pro, 12 months of YouTube Premium (which itself includes YouTube Music), 12 months of Google Play Pass, 12 months of NBA League Pass and a bundle of other custom XR content and apps. So on top of a slick design, top-tier optics and a new platform, Google and Samsung are basically tossing in a kitchen sink of apps and memberships with the headset.

    The Galaxy XR will be available with an optional carrying case and wireless controllers.
    Sam Rutherford for Engadget

    My only reservation is that when it comes to mass adoption, I think smartglasses have supplanted headsets as the next big mainstream play. Granted, there's a lot of technology and software shared between the two categories of devices (Google has already teased upcoming Android XR smartglasses), which should allow Samsung and Google to pivot more easily down the line. But the idea that in the future there will be a headset in every home seems less likely every day. Still, as a showcase for the potential of mixed reality and high-end optics, the Galaxy XR is an exciting piece of tech.

    The Samsung Galaxy XR is available now for $1,800 on Samsung.com.

    Sam Rutherford

  • How to order the Samsung Galaxy XR headset

    Samsung's take on the Vision Pro is here, and you can already order it. Costing just over half as much as Apple's reality machine, the Galaxy XR has a 4K micro-OLED screen and a 100-degree horizontal field of view. The $1,800 mixed reality headset is available for pre-order now on Samsung's website.

    The Galaxy XR isn’t only a Samsung product. The company developed the long-rumored headset alongside Google and Qualcomm. It’s the first Android XR product, a line that will eventually include AI glasses “and beyond.” You can read more about the headset and its ecosystem in Engadget’s news coverage.

    Given Google's involvement with the Galaxy XR, it isn't too surprising that the company is offering bonuses for early orders. If you buy the headset before the end of 2025, you'll get "The Explorer Pack," which includes a year of access to Google AI Pro, YouTube Premium and Google Play Pass. Also included until the end of the year is the "XR Pack," which adds three months of YouTube TV, a year of NBA League Pass, NFL Pro Era, Adobe's Project Pulsar, Asteroid and Calm.

    You can order the Galaxy XR now from Samsung's website and in Samsung Experience Stores. The headset costs $1,800. An optional Galaxy XR Controller costs $250. And somehow, the official Galaxy XR travel case also costs $250, which is, yikes, a lot. Perhaps consider waiting for third-party alternatives on the case front.

    Samsung is offering a 24-month financing plan for the headset ($75.01 monthly) on its website. Meanwhile, Samsung’s stores have that plan as well as a 12-month one ($149 monthly).
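    As a quick sanity check on the financing plans above, the totals are simple multiplication; this minimal sketch just multiplies out the monthly prices quoted in this article (the plan names and figures come from the text, not from Samsung's checkout):

    ```python
    # Totals for the two financing plans quoted above:
    # 24 months at $75.01/month (website), 12 months at $149/month (in-store).
    plans = {"24-month": (24, 75.01), "12-month": (12, 149.00)}

    for name, (months, monthly) in plans.items():
        total = round(months * monthly, 2)  # round to cents for display
        print(f"{name}: {months} x ${monthly:.2f} = ${total:,.2f}")
    # → 24-month: 24 x $75.01 = $1,800.24
    # → 12-month: 12 x $149.00 = $1,788.00
    ```

    By these numbers, the 24-month plan works out to a few cents over the $1,800 sticker price, a common artifact of dividing a price into equal monthly installments.
    
    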

    Will Shanklin