ReportWire

Tag: gear

  • South Korea aims to bolster local chip production with $19 billion of support

    South Korea is the latest country to support its local chip industry in a significant fashion. It’s trying to stay competitive with the likes of the US, China and Taiwan with the help of a 26 trillion won ($19 billion) support package. The country will extend tax breaks that were set to expire at the end of this year and provide financial support to chipmakers through the state-run Korea Development Bank.

    Amid large demand for chips to power AI systems and other computing needs, South Korea saw exports of semiconductors rise 56 percent in April compared with a year earlier. That’s despite fierce competition from the likes of Intel and Taiwan Semiconductor Manufacturing Co. (TSMC). SK Hynix said it would bolster its AI chip manufacturing capacity in South Korea with an extra $14.6 billion investment, while Samsung replaced the leader of its semiconductor division to try to become more competitive.

    South Korea’s moves could help it keep pace with the US, which has been trying to ramp up domestic chip production to reduce its reliance on imports. Through the CHIPS Act, the US is subsidizing several major manufacturers. As it happens, one of the largest recipients of a CHIPS Act subsidy is receiving up to $6.4 billion in federal funding for a new semiconductor plant in Texas.

    This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.

    [ad_2]

    Kris Holt

  • Panasonic S9 hands-on: A powerful creator camera with a patented LUT simulation button

    Continuous autofocus for photos works well, though it’s still behind Canon and Sony. The AI is good at locking onto human faces, bodies and eyes, and also works with animals, cars and motorcycles. It’s not a sports or wildlife camera by any means, but the majority of my photos were in focus.

    Like the S5 II, the S9 shoots 14-bit RAW images in single-shot mode but drops to 12-bit RAW for burst shooting. As this was a pre-production camera without the final firmware, I was unable to test RAW quality, but I’d expect it to be in line with the Panasonic S5 II.

    Photo quality otherwise is good from what I’ve seen so far, with realistic colors and skin tones. In low light, I wouldn’t go past about ISO 6400 as noise can get bad compared to cameras with similar sensors, like Nikon’s Z6 II.

    Panasonic S9 mirrorless camera sample images

    Steve Dent for Engadget

    I liked the S9 as a street photography camera, as it’s discreet, silent and lightweight. However, the new $200 pancake lens that helps make it so light is manual focus only and has just one f/8 aperture setting, which may turn off buyers. On top of that, with no electronics in the lens, the zoom window doesn’t pop up to aid focus, so you need to rely on the focus peaking assist.

    As a video camera, the S9 is generally excellent, but has some pluses and minuses compared to the ZV-E1. On the positive side, the higher-resolution sensor allows for up to 6.2K 30p or supersampled 4K 30p video using the entire sensor width. It also supports full readout 3:2 capture that makes vertical video easier to shoot.

    4K 60p requires an APS-C crop, and to get 120 fps video you need to drop down to 1080p. Like the S5 II, it supports a number of anamorphic formats with supported lenses.

    Panasonic S9 mirrorless camera hands-on

    Steve Dent for Engadget

    The ZV-E1 has half the resolution, so video isn’t quite as sharp, but Sony’s camera can shoot 4K at up to 120 fps and rolling shutter isn’t nearly as bad.

    One potential issue with this camera for creators is the limited continuous recording time, which is capped at just 10 minutes at 6.2K and 15 minutes at 4K. That’s due to the small size and lack of a fan, but you can start recording again immediately after it stops — so this would mainly affect event shooters needing to do long takes. We’ll see if these recording times remain in the final firmware.

    The S9 has excellent in-body stabilization, with up to 6.5 stops using supported lenses. Like the S5 II, it offers a boost mode that’s best for handheld shooting with limited movement, and an electronic mode with a 1.4x crop in the “high” setting.

    Panasonic S9 hands-on: A powerful creator camera with a patented LUT simulation button

    Steve Dent for Engadget

    The latter can smooth out footsteps and other jolts well enough to replace a gimbal in a pinch. It does a better job than the ZV-E1 with abrupt movements, but the latter crops in slightly less at 1.3x.

    Autofocus mostly keeps subjects sharp, but it can occasionally lag. The AI-powered face-tracking stays locked on a subject’s eyes and face, though sometimes the autofocus itself doesn’t keep up. However, these could be pre-production issues.

    With the same sensor as the S5 II, quality is very similar. Video is sharp and colors are realistic, with pleasing skin tones. It’s not quite as good in low light as other 24MP cameras like the Canon R6 II, with noise starting to become noticeable at ISO 6400. The ZV-E1, in comparison, can shoot clean video at ISO 12800 and even beyond.

    Panasonic S9 mirrorless camera hands-on

    Steve Dent for Engadget

    I enjoy shooting Panasonic V-log video as it’s easy to adjust in post and offers excellent dynamic range. It’s one big reason Panasonic cameras are so popular with professional videographers, so it’s nice to see this on a less expensive model.

    So what about the new LUT feature? To get the most out of it, you have to go into the new Lumix Lab app. Panasonic has a handful of presets to get you started, or you can load custom LUTs from a variety of creators. You can also make your own in an editing program like DaVinci Resolve.

    Panasonic S9 mirrorless camera hands-on

    Steve Dent for Engadget

    Applying the LUT bakes the look into the video, which makes it hard to adjust it later on. However, you can shoot standard or V-Log footage and use the LUT as a preview, then apply that same look in post without being locked in.
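
    For readers new to the term: a LUT (lookup table) is just a precomputed mapping from input color values to graded output values, which is why applying one is cheap enough to do in-camera, and why, once applied, the original values are gone. Here's a minimal illustrative sketch in Python (not Panasonic's implementation; the 8-bit per-channel model and gamma curve are assumptions chosen for demonstration):

```python
# Illustrative sketch only, not Panasonic's implementation: an 8-bit,
# per-channel 1D LUT. Real camera LUTs are usually 3D (mapping R, G and B
# together), but the principle is the same precomputed table lookup.

def build_lut(gamma: float = 0.8) -> list[int]:
    """Precompute a 256-entry table applying a simple gamma 'look'."""
    return [round(255 * (v / 255) ** gamma) for v in range(256)]

def apply_lut(pixels, lut):
    """Grade a frame: one table lookup per channel per pixel.
    The original values are overwritten, i.e. the look is 'baked in'."""
    return [(lut[r], lut[g], lut[b]) for r, g, b in pixels]

lut = build_lut()
frame = [(0, 128, 255), (64, 64, 64)]   # two sample RGB pixels
graded = apply_lut(frame, lut)          # gamma < 1 lifts the midtones
```

    Shooting in V-Log and applying the LUT in post amounts to running the same lookup on your editing machine instead of in the camera, which is why that route keeps your options open.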

    The LUT button is a clever idea, as it allows creators to create cool shots without the need to futz around in post. However, many may not even be familiar with the term “LUT,” so Panasonic has an uphill battle selling the benefits. By comparison, many influencers understand the advantages of Fujifilm’s simulations.

    Panasonic S9 mirrorless camera hands-on

    Steve Dent for Engadget

    With the S9, Panasonic is trying to attract influencers with a small, stylish camera that makes it easy to create interesting video looks quickly. At the same time, it has nearly all the capabilities of higher-end models like the S5 II.

    It does have some flaws that make it a hard sell for photographers. And I’m concerned about the $1,500 price tag, as that’s just a bit less than the S5 II, which has an EVF, mechanical shutter, extra card slot, better ergonomics and more.

    So far, it comes out well against the ZV-E1, though. I like the extra resolution and sharpness, and it has superior stabilization. It’s also cheaper, but only by about $300 at the moment. It looks like a good first try and I have a few quibbles, but I’ll know more once I’m able to test the production version.

    Steve Dent

  • Scarlett Johansson says OpenAI used her likeness without permission for its ‘Sky’ voice assistant

    Actor Scarlett Johansson has accused OpenAI of copying her voice for one of the voice assistants in ChatGPT despite her denying the company permission to do so. Johansson’s statement on Monday came hours after OpenAI said that it would no longer use the voice in ChatGPT, though it did not provide a reason why.

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,” Johansson wrote in the statement that was first shared with NPR. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.” Johansson added that she declined the offer after “much consideration and for personal reasons,” but when OpenAI demoed GPT-4o, the company’s latest large language model, last week, “my friends, family, and the general public all noted how much the newest system named ’Sky’ sounded like me.”

    When Johansson saw OpenAI’s newest demo, she said she was “shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.” She also revealed that Altman had contacted her agent just two days before the company revealed GPT-4o and asked her to reconsider, but released the system anyway before she had a chance to respond.

    “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” an OpenAI spokesperson said in a statement sent to Engadget that the company attributed to Altman, OpenAI’s co-founder and CEO. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

    Even though “Sky” has been one of the voice assistants in ChatGPT since September 2023, GPT-4o, which the company announced last week, takes things a step further. The company said that the new model is closer to “much more natural human-computer interaction” and demoed its executives having nearly human-like conversations with the voice assistant in ChatGPT. This invited comparisons to Samantha, the virtual voice assistant played by Johansson in the 2013 movie Her, who has an intimate relationship with a human being. Shortly after the event, Altman tweeted a single word — “her” — in an apparent reference to the film.

    On Monday, OpenAI said that it was pausing the use of “Sky” in ChatGPT and released a lengthy post revealing how the company hired professional voice actors to create its own virtual assistants, and denying any similarities with Johansson’s voice.

    “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI wrote and added that each of its performers, who it declined to name for privacy reasons, was paid “above top-of-market rates, and this will continue for as long as their voices are used in our products.”

    This move, Johansson said in her statement, only came after she hired legal counsel who wrote two letters to Altman and OpenAI asking for an explanation. “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson wrote. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

    Update, May 20 2024, 9:09 PM ET: This story has been updated to include a statement from OpenAI.

    Pranav Dixit

  • iFixit’s teardown of the new M4 iPad Pro reveals an easier-to-replace battery

    The new 13-inch M4 iPad Pro is really, really thin. That’s inevitably going to make certain aspects of repairing the device even more difficult, which iFixit confirmed in a teardown published this weekend. But it does shine in one area — when it comes to replacing the battery, Apple made some seriously helpful changes.

    “For the first time in an iPad Pro, we’re able to remove the battery immediately after removing the screen,” Teardown Tech Shahram Mokhtari wrote in a blog post. Mokhtari notes that “immediately is relative,” as there are still some screws and brackets to remove before the battery can be taken out, and the video documenting the process shows it takes a bit of work to get to the pull tabs beneath the batteries, but the new setup still shaves hours off the process compared to earlier models.

    “The fact that you can remove the battery without having to remove every major component inside this device is still a huge win for repairability,” Mokhtari says in the video. “It’s a massive improvement over the previous generation.” Everything else, on the other hand, is going to be pretty tricky to repair without causing damage. “From the daughterboard to the speakers and coax cables, we found a whole bunch of stuff that’s glued down because there just isn’t enough space for screws.”

    And, the teardown shows the new Apple Pencil Pro is a repairability nightmare. Mokhtari — who got cut by the Pencil while trying to get to its insides — called it “a disposable piece of crap once the battery dies.”

    Cheyenne MacDonald

  • Apple will reportedly offer higher trade-in credit for old iPhones for the next two weeks

    It might be a good time to finally upgrade your iPhone if you’ve been hanging onto an older model — according to Bloomberg’s Mark Gurman, Apple will be offering a little more than usual for some trade-ins starting next week in the US and Canada. The company itself hasn’t said anything about the promotion, but according to Gurman, it’ll be offered in-store to customers who’ll be using the credit toward any model in the iPhone 15 lineup. This will reportedly be in effect starting this Monday and last until June 3.

    Apple lists trade-in values on its website for all iPhone models going back to the iPhone 7. Something that old currently goes for something in the ballpark of $50, while a more recent model like the year-and-a-half-old iPhone 14 Pro Max has an estimated trade-in value of up to $630. Of course, the online estimates aren’t always what you end up getting, but they give you an idea. Since Apple hasn’t said anything about a temporary value boost, it’s unclear by how much these numbers may go up.

    Cheyenne MacDonald

  • How to watch the Microsoft Build 2024 keynote live on May 21

    Springtime means it’s keynote season in the tech world, and in 2024, that means “time to show off your AI bona fides.” Google and OpenAI have already revealed big new upgrades to Gemini and ChatGPT this month, and now it’s time for Microsoft Build. The tech giant’s annual developer conference kicks off with a keynote slated for Tuesday, May 21 at 12 PM ET/9 AM PT, and you can watch the entire event live on YouTube and at Microsoft’s site (registration required). What about that Microsoft Surface event you may have heard about? Well, that’s actually happening a day earlier: Monday, May 20. Confused? Don’t worry, here’s the tl;dr version of what to expect, summarized from our more in-depth What to expect from Microsoft Build 2024: The Surface event, Windows 11 and AI.

    One day before the official Build keynote, Microsoft is hosting a more intimate event for journalists at which it plans to reveal its “AI vision across hardware and software.” That event won’t be livestreamed, but Engadget will have full coverage as it unfolds.

    The rumor mill strongly suggests that we’ll see new consumer-targeted Surface PCs. And unlike the enterprise-centric models like the Surface Pro 10 and Surface Laptop 6 introduced back in March, these new models may be powered by updated Qualcomm Snapdragon chips – Arm chips that run cooler and offer far better battery life than their Intel and AMD equivalents, but often at the expense of reduced app compatibility and processing speeds.

    The thought is that Microsoft is following the template that its fellow tech giants have demonstrated this season: get the hardware announcements out of the way first, clearing the runway for an all-AI showcase at the developer conference. That’s what happened with Apple and Google in recent weeks, as they respectively revealed new iPads weeks before the WWDC event in June, and a new Pixel 8a phone in the days leading up to Google I/O.

    What’s that mean for Tuesday? Last year’s Build announcements give you the general flavor: Microsoft’s Copilot AI (possibly with more impressive OpenAI-powered smarts) integrated into even more of Microsoft’s DNA, likely from the device level (Windows) all the way up to the company’s massive cloud infrastructure.

    While much of Tuesday’s news will be through the prism of Microsoft’s developer community, we’re looking forward to giving you the big picture on what it all means for end users – and how it dovetails with the hardware announcements we expect to hit on Monday. Stay tuned.

    John Falcone

  • OpenAI strikes deal to put Reddit posts in ChatGPT

    OpenAI and Reddit announced a partnership on Thursday that will allow OpenAI to surface Reddit discussions in ChatGPT and Reddit to bring AI-powered features to its users. The partnership will “enable OpenAI’s tools to better understand and showcase Reddit content, especially on recent topics,” both companies said in a joint announcement. As part of the agreement, OpenAI will also become an advertising partner on Reddit, which means that it will run ads on the platform.

    The deal is similar to the one that Reddit struck with Google in February, which is reportedly worth $60 million. A Reddit spokesperson declined to disclose the terms of the OpenAI deal to Engadget, and OpenAI did not respond to a request for comment.

    OpenAI has been increasingly striking partnerships with publishers to get data to continue training its AI models. In the last few weeks alone, the company has signed agreements with the Financial Times and Dotdash Meredith. Last year, it also partnered with German publisher Axel Springer to train its models on news from Politico and Business Insider in the US and Bild and Die Welt in Germany.

    Under the new arrangement, OpenAI will get access to Reddit’s Data API, which, the company said, will provide it with “real time, structured, and unique content from Reddit.” It’s not clear what AI-powered features Reddit will build into its platform as a result of the partnership. A Reddit spokesperson declined to comment.

    Last year, getting access to Reddit’s data, a rich source of real-time, human-generated and often high-quality information, became a contentious issue after the company announced that it would start charging developers to use its API. As a result, dozens of third-party Reddit clients were forced to shut down and thousands of subreddits went dark in protest. At the time, Reddit stood its ground, arguing that large AI companies were scraping its data with no payment. Since then, Reddit has been monetizing its data by striking such deals with Google and OpenAI, whose progress in training their AI models depends on having access to it.

    Pranav Dixit

  • Solo Stove Memorial Day sales cut up to $280 off Pi Ultimate pizza oven bundles

    When a good deal hits your eye like a big pizza pie, it may be a great day. Maybe more so than usual in this case if you’re in the market for a pizza oven, as some Solo Stove bundles have been discounted ahead of Memorial Day. The company is running a site-wide sale with up to 30 percent off everything, including the Pi Ultimate bundle that includes an oven with support for both gas and wood sources. That has dropped by $280 to $600.

    Meanwhile, a bundle with the Pi model that supports wood burning only is $270 off. That means it can be yours for $480. On the downside, both models are showing estimated shipping dates of June 3 at the time of writing, so you likely won’t get your oven in time for Memorial Day weekend, sadly.

    That said, snapping one up will prepare you for a summer of delicious pies. The Solo Stove Pi is one of our top picks for the best multifuel outdoor pizza oven, behind the more expensive Ooni Karu 16.

    The Pi has an open-front design and it’s made out of stainless steel. It can reach temperatures of up to 850 degrees Fahrenheit when burning wood and 900 degrees when using gas, according to Solo Stove. That works out to stone temperatures of 750 and 800 degrees, respectively.

    The bundles include all kinds of useful accessories, including a stand, bamboo and stainless steel peels, a turner, a thermometer, a silicone mat, a pizza cutter and a shelter for protection from the elements. One thing the bundle does not include, unfortunately, is a gas burner for the dual-fuel model. You’ll need to buy that separately for $120.

    Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

    Kris Holt

  • Threads search will finally be usable with ‘recent’ tab rollout

    Threads is inching closer to becoming an actually useful source for real-time news and updates. The app is finally rolling out the ability to search posts in order of recency, after first announcing the feature last month.

    “In an effort to make it easier to find timely, relevant content on Threads, we’re introducing a Recent tab for your searches,” Instagram’s Adam Mosseri wrote in a post. “Search results here are still evaluated for quality, but you can now see them in chronological order.”

    The change has been a long requested one from users hoping Meta’s app will one day be a source of breaking news and real-time information the way that Twitter historically functioned. Being able to search for topics and keywords and find the most recent results is key to finding up-to-date details and commentary about breaking news, sports and anything else happening in real time.

    On the other hand, Meta has also made it clear that it would prefer “news” not to be what Threads is known for. Mosseri has said he doesn’t want to “encourage” hard news on Threads, and the company actively limits recommendations of political content. Threads’ default “for you” algorithm is also known for surfacing days-old posts, random personal stories and other content that’s not exactly timely.

    It’s also worth pointing out that Threads’ new recency filter in search is not the same as the “latest” search filter on X. As Mosseri noted in his post, Meta still hides an unknown number of posts in search results that have been “evaluated for quality,” so Threads search will never surface all of the posts containing your search terms. But being able to at least find posts that aren’t a few days old should make looking for timely information a lot less frustrating.

    Karissa Bell

  • Threads gets its own fact-checking program

    This might come as a shock to you, but the things people put on social media aren’t always truthful — really blew your mind there, right? Because of this, it can be challenging for people to know what’s real without context or expertise in a specific area. That’s part of why many platforms use a fact-checking team to keep an eye (or at least look like they’re keeping an eye) on what’s getting shared. Now, Threads is getting its own fact-checking program, Adam Mosseri, head of Instagram and de facto person in charge at Threads, announced. He first shared the company’s plans to do so in December.

    Mosseri stated that Threads “recently” made it so that Meta’s third-party fact-checkers could review and rate any inaccurate content on the platform. Before the shift, Meta was having fact-checks conducted on Facebook and Instagram and then matching “near-identical false content” that users shared on Threads. However, there’s no indication of exactly when the program started or if it’s global.

    Then there’s the matter of seeing how effective it really can be. Facebook and Instagram already had these dedicated fact-checkers, yet misinformation has run rampant across the platforms. Ahead of the 2024 Presidential election — and as ongoing elections and conflicts happen worldwide — is it too much to ask for some hardcore fact-checking from social media companies?

    Sarah Fielding

  • Google Project Astra hands-on: Full of potential, but it’s going to be a while

    At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are going in the future. It’s a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it’s clear there’s a long way to go before something like Astra lands on your phone. So here are some takeaways from our first experience with Google’s next-gen AI.

    Sam’s take:

    Currently, most people interact with digital assistants using their voice, so right away Astra’s multi-modality (i.e. using sight and sound in addition to text/speech) to communicate with an AI is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

    The first Project Astra demo we tried used a large touchscreen connected to a downward-facing camera.

    Photo by Sam Rutherford/Engadget

    In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun and the tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google’s teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

    Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

    In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

    An AI-generated story about a dinosaur and a baguette created by Google's Project Astra

    Photo by Sam Rutherford/Engadget

    Unlike a lot of Google’s recent AI features, Astra (which is described by Google as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently only spans a few minutes. And even if Astra could remember things for longer, there are things like storage and latency to consider, because for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down with the knowledge that it will be some time before we can get more full-feature functionality.

    Karissa’s take:

    Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited for iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real-time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

    Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses seen in the video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

    During a demo at Google I/O, Project Astra was able to remember the position of objects seen by a phone's camera.

    Photo by Sam Rutherford/Engadget

    It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “which of these toys should a 2-year-old play with?” It could recognize what was in my doodle and make up stories about different toys we showed it.

    But most of Astra’s capabilities seemed on-par with what Meta has available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

    The Astra feature that may set Google’s approach apart is the fact that it has a built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if it could remember where I left them last night, that would actually feel like sci-fi come to life.

    But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now Google still has a lot of work to do.

    Catch up on all the news from Google I/O 2024 right here!

    [ad_2]

    Sam Rutherford,Karissa Bell

    Source link


  • What to expect at Google I/O 2024: Gemini, Android 15, WearOS and more details

    What to expect at Google I/O 2024: Gemini, Android 15, WearOS and more details

    [ad_1]

    It’s almost that time again, folks: we’re about to find out about some of Google’s big ideas for the year ahead at its I/O developer conference. Most of the big news will come from the opening keynote on May 14, which will almost certainly give us more info on Android 15 as well as a whole bunch of AI updates.

    There will surely be some surprises, though we’ll more than likely need to wait until the fall to get the full lowdown on the company’s latest flagship hardware.

    As always, the rumor mill has been churning away with a number of reports highlighting what Google is likely to discuss at I/O. To that end, here’s what to expect from the Google I/O 2024 keynote:

    Starting at $1,799, the Pixel Fold is Google's first attempt at making a flagship flexible phone.

    Photo by Sam Rutherford/Engadget

    I/O is a developer conference first and foremost. This is always where Google gives third-party devs the full lowdown on the next major Android version so they can start working on apps for it or modify their existing products.

    The first Android 15 betas are already out in the wild. Among the features are an updated Privacy Sandbox, partial screen sharing (so you can record or share a certain app window instead of the entire screen) and system-level app archiving to free up space. There’s also improved satellite connectivity, additional in-app camera controls and a new power efficiency mode.

    However, Google is saving the bulk of the Android 15 announcements for I/O. The company has confirmed satellite messaging is coming to Android, and we could find out more about how that works. Lock screen widgets may be a focus for tablets, while Google might place an emphasis on an At a Glance widget for phones. A status bar redesign may be in the offing, and it may at long last be easier for you to monitor battery health.

    Wake words may once again be in the offing for third-party assistants such as Alexa and even ChatGPT. Rumors also suggest there may be a feature called Private Space to let you hide data and apps from prying eyes.

    A photo of a phone screen and a computer screen showing the Gemini chatbot on their displays.

    Google

    If you drop a dollar into a jar every time someone mentions AI during the keynote, you’ll probably stash away enough cash for a vacation. The safe money’s on Google talking about Gemini AI, which may end up replacing Assistant entirely. If that’s the case, we could find out some of the details about the transition at I/O.

    Back in December, it was reported that Google was working on an AI assistant called Pixie as an exclusive feature for Pixel devices. Pixie is said to be based on Gemini and may debut in the Pixel 9 later this year, so it would make sense for the company to start discussing that at I/O.

    It wouldn’t be a surprise to learn about generative AI updates for key Google products such as Search, Chrome, Maps and G Suite. AI-driven accessibility features and health projects may be in the offing too. Meanwhile, with Google once again delaying its plan to kill off third-party cookies in Chrome, it may see AI as a solution to ad targeting and spill the beans on any plan for that at I/O.

    Google display on compatible cars

    Google

    The full I/O schedule offers some insight into what else Google will discuss, even if those products and services won’t necessarily get airtime in the keynote.

    Google has lined up a panel on the future of Wear OS, which will include details on “advances in the Watch Face Format,” so expect some news about its smartwatch operating system. There will also be updates on Google TV and Android TV.

    Meanwhile, Google’s quantum computing team will talk about what’s feasible in the space and attempt to separate fact from fiction. An Android Auto panel is on the schedule too, hinting at developments for multi-display and casting experiences.

    A medium shot of the blue Pixel 8 Pro, focusing on its camera bar and the temperature sensor in it.

    Photo by Cherlynn Low / Engadget

    It would be a major surprise for Google to reveal a Pixel 9 or a new Pixel Fold this early in the year. The company is probably going to save those details for the fall ahead of those devices going on sale around that time. However, it did formally reveal the Pixel Fold at I/O last year, so we could get a glimpse of some hardware — especially if it wants to get out ahead of the leakers and control the narrative.

    On the other hand, Google recently consolidated its Android and hardware teams under Rick Osterloh. His team may want a little more prep to make sure new devices are ready for primetime under the latest regime. As such, any hardware news (including anything to do with Nest or wearables) could be a little farther out.

    [ad_2]

    Kris Holt

    Source link

  • Most App Store developers aren’t taking Apple up on its new outside payments option

    Most App Store developers aren’t taking Apple up on its new outside payments option

    [ad_1]

    It seems Apple’s recently added option for App Store developers to include links to external payment methods isn’t actually all that appealing. In a hearing on Friday as part of the ongoing legal battle with Epic, Apple said only 38 developers have applied to add such links — out of roughly 65,000 that could, according to . The new guidelines, , require developers get Apple’s approval before they can add alternative payment options and stipulate that they’ll still have to pay a commission fee of up to 27 percent.

    The changes were intended to satisfy an injunction ordered by U.S. District Judge Yvonne Gonzalez Rogers in 2021, but, per , Epic in March called Apple’s attempt at compliance “a sham” and filed a complaint with the court. At this point, Rogers doesn’t really seem impressed either. “It sounds to me as if the goal was to then maintain the business model and revenue you had in the past,” Rogers said of Apple’s solution during the latest hearing, according to Bloomberg.

    On top of Apple’s commission, developers also need to consider payment processing fees, which altogether could lead to them paying even more than they did before. “You’re telling me a thousand people were involved [in approving the new fee] and not one of them said maybe we should consider the cost [to developers]?” the judge reportedly said.

    [ad_2]

    Cheyenne MacDonald

    Source link

  • ‘Extreme’ geomagnetic storm may bless us with more aurora displays tonight and tomorrow

    ‘Extreme’ geomagnetic storm may bless us with more aurora displays tonight and tomorrow

    [ad_1]

    The strongest geomagnetic storm in 20 years made the colorful northern lights, or aurora borealis, visible Friday night across the US, even in areas that are normally too far south to see them. And the show may not be over. Tonight may offer another chance to catch the aurora if you have clear skies, according to the NOAA, and Sunday could bring yet more displays reaching as far as Alabama.

    The NOAA’s Space Weather Prediction Center said on Saturday that the sun has continued to produce powerful solar flares. That’s on top of previously observed coronal mass ejections (CMEs), or explosions of magnetized plasma, that won’t reach Earth until tomorrow. The agency has been monitoring a particularly active sunspot cluster since Wednesday, and confirmed yesterday that it had observed G5 conditions — the level designated “extreme” — which haven’t been seen since October 2003. In a press release on Friday, Clinton Wallace, director of NOAA’s Space Weather Prediction Center, said the current storm is “an unusual and potentially historic event.”

    Geomagnetic storms happen when outbursts from the sun interact with Earth’s magnetosphere. While it all has kind of a scary ring to it, people on the ground don’t really have anything to worry about. As NASA explained on X, “Harmful radiation from a flare cannot pass through Earth’s atmosphere” to physically affect us. These storms can mess with our technology, though, and have been known to disrupt communications, GPS, satellite operations and even the power grid.

    [ad_2]

    Cheyenne MacDonald

    Source link

  • Get up to $450 off a Google Pixel Tablet when you trade in your old iPad or Android slab

    Get up to $450 off a Google Pixel Tablet when you trade in your old iPad or Android slab

    [ad_1]

    Google has an offer for iPad owners who are curious about the Pixel Tablet. The company has a trade-in promotion that covers at least the cost of the Pixel Tablet for iPad owners — if not more, depending on which model you have. It works with Samsung tablets as well, but those trade-in values are lower. The Pixel Tablet costs $399 (without deals) for 128GB storage and no charging speaker dock.

    Google

    Get the Pixel Tablet for free with an eligible trade-in.

    with trade-in of an eligible tablet

    $399 at Google Store

    The promo works with iPads as old as the sixth-generation model from six years ago. For that, Google will give you a surprising $399 — matching the Pixel Tablet’s base cost. That iPad model only cost $329 in 2018, so Google is overpaying by a lot for that one.

    However, Google balances that with much worse offers for modern, high-end iPads. For example, the 12.9-inch iPad Pro with M2 chip (2022) only nets $450. Until this week (when the company launched a new iPad Pro and iPad Air), Apple sold that model for $1,099, so we don’t recommend that trade-in price. If you’re done with a high-end iPad from the last few years, you can likely sell it on places like eBay, Craigslist or Swappa for significantly more.

    View of the Pixel Tablet on a shelf next to books and oddities.

    Sam Rutherford for Engadget

    The Pixel Tablet stands out from its Android-running competitors by working with a charging speaker base that lets the device double as a smart display, making it much more versatile. Engadget’s Cherlynn Low thought that part overshadowed its core functionality as a tablet. “As a smart display, the Pixel Tablet mostly shines. It has a useful dashboard, an easy-to-read interface and impressive audio quality,” she wrote in our full review.

    The tablet has a 10.95-inch display with a 2,560 x 1,600 resolution (276 PPI) and runs on a Google Tensor G2 chip. It weighs slightly over a pound and is lighter than Android rivals like the Galaxy Tab S8 and OnePlus Pad. Its back has a nano-ceramic coating that gives it a premium, glass-like feeling that you may not expect from a $399 device.

    Accessories are where the Pixel Tablet stands out the most. Google’s Pixel Tablet Case, sold separately for $79, has a built-in kickstand that makes the slate more versatile. “What I love about the kickstand-hanger-combo is that it allows you to place the Tablet pretty much anywhere,” Low wrote in Engadget’s review. “So when I want to hang it off a kitchen cabinet to follow along with a recipe video or keep watching Love Is Blind for example, I can. And though the 2,560 x 1,600 LCD panel isn’t as vibrant as the OLED on Samsung’s Galaxy Tabs, it still produced crisp details and colorful images.”

    The star accessory is Google’s $129 charging speaker dock, which you can use without removing the kickstand case. This product transforms the tablet into a smart display, potentially voiding the need for other smart home control hubs. The speaker has impressive sound for its size, making it easier to hear its responses if you aren’t right next to it.

    Google’s fine print notes that the trade-in value will be finalized after receiving the tablet, and it could be lower if it determines the condition doesn’t match what you selected during the trade-in process. The refund will be processed on the credit card you used to buy the Pixel Tablet (or through Google Store credit if you return your purchase during that time).

    Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

    [ad_2]

    Will Shanklin

    Source link

  • The 2023 Echo Show 8 is on sale for $100 right now

    The 2023 Echo Show 8 is on sale for $100 right now

    [ad_1]

    Last year Amazon upgraded its Echo Show 8 to make it look better, sound better and respond more quickly to Alexa commands. It made our best smart display list, and if you’ve been eyeing one, it’s on sale at a steep discount. The third-gen, 2023 Echo Show 8 is 33 percent off, bringing it down to just $100 ($50 off) and only $10 shy of its all-time low. Amazon also has stellar deals on the Echo Dot and Echo Pop, offering them for $28 and $20 respectively.

    Amazon

    Amazon’s Echo Show 8 (3rd generation) has a new design and new features that make it more responsive to Alexa commands.

    $100 at Amazon

    The 2023 Echo Show 8 has a new design with edge-to-edge glass and softer curves that help it blend into your decor. Inside, it comes with new spatial audio with room calibration that allows for fuller sound than the previous model. Meanwhile, video calling benefits from crisper audio and a 13-megapixel camera.

    The new Adaptive Content feature changes what’s shown on the screen based on where you are in the room. If you’re standing far away, it’ll display easily digestible information in large font, like the weather or news headlines. As you get closer, it’ll switch to a more detailed view. It can also show personalized content for anyone enrolled in visual ID, surfacing your favorite playlists and other content.

    It also boasts 40 percent faster response times for Alexa thanks to its upgraded processor. For privacy-conscious buyers, it has a physical camera shutter that’s controlled with a slider on the top of the device. There’s also a button to turn off the mic and camera. As mentioned, the Echo Show 8 (3rd gen) is on sale for $100 in either charcoal or glacier white.

    If you only need a small Echo speaker device for an extra room, Amazon is also selling the 5th-generation Dot for just $28, a steep 44 percent off the regular $50 price. That device has the best sound yet for a Dot device, while offering Alexa, smart home features and more. Amazon’s smallest device, the Echo Pop, also offers Alexa features and is on sale for just $20.

    Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

    [ad_2]

    Steve Dent

    Source link

  • The Beats Fit Pro wireless earbuds are on sale for $160 right now

    The Beats Fit Pro wireless earbuds are on sale for $160 right now

    [ad_1]

    Beats might have to blab about, but its older models are nothing to sniff at, especially when you can score solid deals on them. Take, for instance, the . Those true wireless earbuds have . That matches the Black Friday price and it’s just $10 more than the all-time low.

    Beats

    Our top pick for workout and running headphones is 20 percent off at the moment.

    $160 at Amazon

    The Beats Fit Pro are our pick for the as well as our top choice for . They’re rated for IPX4 water resistance, which is always welcome to have while you’re working up a sweat. They’re comfortable to wear and have solid battery life (six hours plus an extra 21 hours from the charging case).

    None of that would matter if the Beats Fit Pro sounded terrible, but they deliver great sound quality with the help of Adaptive EQ. Spatial audio is always a nice feature to have, while the active noise cancellation and transparency modes are solid. Multipoint connectivity is a plus too.

    On the downside, we thought the charging case felt cheap, with poor build quality. We also found it too easy to accidentally press the onboard controls. Still, if you’re looking for a pair of earbuds for your workouts, you can’t get much better than the Beats Fit Pro right now.

    Elsewhere, the are also on sale. They’re effectively half off and just $10 more than the record low of $170.

    Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

    [ad_2]

    Kris Holt

    Source link

  • Here’s what the long-rumored Sonos wireless headphones will look like

    Here’s what the long-rumored Sonos wireless headphones will look like

    [ad_1]

    Bloomberg reported back in February that Sonos’ long-rumored and long-awaited headphones are dropping in June, a month later than the company originally intended due to a software issue. While Sonos itself has yet to release details about the device, its Dutch authorized dealer Schuurman seems to have published information and images of the headphones ahead of time. A Redditor in the Sonos group discovered Schuurman’s listings (via The Verge) after someone else found out that the wireless headphones will officially be called the Sonos Ace.

    A pair of headphones, wires and a carrying case.

    Sonos

    Based on the images, the Ace device package will come with the headphones, some wires and a carrying case. It looks like the headphones themselves will have buttons and a toggle switch on the earphone parts of the device. The images are pretty low-quality, so we can’t comment on how premium the model looks, but it does seem like the device is going to be a pair of over-ear headphones. Schuurman has listed the device package for €403.58 ($435), which is pretty near the $449 pricing Bloomberg mentioned in its previous report.

    As the news organization said at the time, Sonos CEO Patrick Spence is hoping that launching the new device category can help fuel growth for the company known for its speakers and sound bars after years of sluggish sales. The upcoming Ace headphones were reportedly designed to work with the company’s existing devices and can stream audio directly from TVs and music streaming services using its built-in Wi-Fi connection. Bloomberg said that Sonos is also looking into the possibility of releasing an in-ear model in the future to compete with Apple’s AirPods and other similar products.

    A screenshot of the Schuurman website.

    Schuurman

    [ad_2]

    Mariella Moon

    Source link

  • Proton’s new password monitor update will scour the dark web on your behalf

    Proton’s new password monitor update will scour the dark web on your behalf

    [ad_1]

    Proton’s encrypted password manager, Proton Pass, has received a significant update. This comes in the form of a new toolset called Pass Monitor, which will alert users of account weaknesses and data breaches.

    This is done automatically, and the system will even guide users through solutions in the event of a data leak from a third-party service. It also scours the dark web and alerts people if Proton addresses, email aliases and up to ten custom email addresses have been leaked and used for nefarious purposes. If this happens, you’ll get an alert so you can take action.

    Pass Monitor includes a password health feature that flags any weak or reused passwords that could use an update. The toolset’s inactive two-factor authentication check adds a further layer of security by identifying accounts that offer 2FA but don’t yet have it enabled.

    Finally, the company’s bringing its into Pass Monitor. The service uses a combination of AI and human analysts to detect and block account takeover attacks.

    The password health and 2FA checks are available to free users, but monitoring of the dark web and Proton Sentinel are only for paying members. Luckily, Pass Plus memberships are currently . These new tools, available on Windows, Android and iOS, will roll out to current users in the “next few days.”

    Proton is actually a fairly new entrant in the password security game, as the password manager . The company is more famous for its stellar VPN software, which topped .

    This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.

    [ad_2]

    Lawrence Bonk

    Source link