ReportWire

Tag: Jony Ive

  • Apple’s Magic Mouse is down to $68 right now


    Apple’s USB-C Magic Mouse is back on sale at $68, about $11 (14 percent) off its usual retail price of $79 — a rare discount on one of Apple’s best accessories from a company that does not often run sales.

    The multi-touch mouse was first released in 2009, with a modest refresh in 2015 and the addition of a USB-C port in 2024. The rechargeable mouse features gesture controls and automatically pairs with your Mac when connected via USB. The Magic Mouse can also be used with an iPad via Bluetooth, or with a Windows PC, though with limited functionality.

    Famously, Jony Ive’s design of the Magic Mouse puts its charging port on the underside of the body, rendering the mouse unusable while charging. In 2024, there were rumors of a more comprehensive redesign, but nothing has materialized since.


    The deal only applies to the white model. 


    Andre Revilla

  • iPhone designer Sir Jony Ive gets physical with Ferrari Luce EV – Tech Digest


    Sir Jony Ive, the man who famously stripped the physical keyboard from the mobile phone with his iconic iPhone design, has taken a very different approach with his latest project for Ferrari.

    As Ferrari unveils the “Luce”, the famous Italian brand’s first fully electric sports car, the spotlight has fallen on an interior designed by Ive and his creative collective, LoveFrom.

    While the rest of the automotive industry is currently engaged in a “screen war,” competing to see who can bolt the largest and shiniest tablet to a dashboard, Ive has taken a radical detour back to physical buttons.

    The cabin of the Luce is a study in what Ive calls “clarity and simplicity.” Eschewing the fingerprint-smudged glass slabs that define modern EVs, the Luce features a meticulously organized environment of physical toggles, machined aluminium switches and Gorilla Glass dials.

    It is a design that feels both retro and deeply human, drawing inspiration from the aviation-style gauges and three-spoke Nardi steering wheels of 1950s Ferraris. However, it is the philosophy behind these controls, rather than just their aesthetic, that marks Ive’s most significant departure from contemporary car design.

    Even the gear selector in the Ferrari Luce has had a Jony Ive makeover

    Touch screens wrong for car interiors, claims Ive

    Ive’s primary critique of the modern car interior is one of safety and soul. In a startling admission for the father of the modern touchscreen, he has declared that touch is the “wrong technology” for a car’s primary interface.

    It’s an argument grounded, he claims, in the reality of driving: a touchscreen requires a driver to look away from the road to confirm they have hit a virtual button. In a Ferrari, where performance is measured in split seconds, that distraction is more than an annoyance – it is a design flaw.

    To solve this, Ive has focused on “eyes-busy, hands-on” interaction. Every switch in the Luce has been engineered to feel different to the touch, allowing a driver to adjust the climate, suspension, or drive modes by muscle memory alone.


    The centerpiece of this experience is a physical key made of Gorilla Glass with an E Ink display. When docked into the centre console, it triggers a “startup ceremony” where light flows from the key across the dashboard’s digital-analogue hybrid dials.

    Even the gear selector has been reimagined as a technical work of art, using laser-drilled holes half the width of a human hair to back-light the graphics.

    By prioritizing these mechanical interactions, Ive is attempting to restore a sense of visceral connection that many fear will be lost in the transition to electric power. And while the Luce features state-of-the-art Samsung OLED displays, they are often tucked behind physical needles or layered with lenses to create a “parallax effect” that mimics the depth of an old-school chronograph.

    It is a sophisticated rejection of the “easy and lazy” design trend of massive screens, proving that, for Jony Ive, the future of the electric car isn’t just about how it moves, but how it feels beneath your fingertips.



    Chris Price

  • OpenAI Says Its Physical Device Is ‘On Track’ for an Unveiling Later This Year


    On Monday, Chris Lehane, OpenAI’s chief global affairs officer, said his company is “on track” to present its famously mysterious thingamajig to the public by the end of the year, according to Axios. This would mean the previously rumored release date, September-ish, was not crazy after all.

    Lehane’s announcement came during an event at the World Economic Forum in Davos, Switzerland. However, Lehane did not provide any details about what this thing is or does. He also, according to Axios, said what he had described was the “most likely” release schedule, but that “we will see how things advance.”

    For details about what the device is and does, you’ll have to read the aforementioned rumors from the China-based leaks account Smart Pikachu. That user posted a week ago that OpenAI is supposedly gunning for the market niche currently occupied by AirPods.

    Smart Pikachu described manufacturing giant Foxconn working on something with the codename “Sweetpea,” a “special audio product” within a company project called “Gumdrop,” vaguely in the earbud or “open-ear headphones” zone. It would be two objects—one for each ear—and a little egg-shaped, dental-floss-holder-sized charging dock. Sweetpea would pack heavy-duty processing power via a 2-nanometer, smartphone-style chip. Its release might also be followed, or accompanied, by four other “Gumdrop” devices between now and 2028, like a “home-style device,” and, um, a pen, according to Smart Pikachu. And once again: these are just unconfirmed rumors at this point.

    But you’ll recall that the vast majority of the actual information OpenAI has given the world so far about its first device comes from two sources: 1) A very strange infomercial for the concept of friendship that OpenAI released in spring of last year starring OpenAI CEO Sam Altman and legendary iPhone designer Jony Ive—whose product design company had just merged with OpenAI.

    And 2) a much longer—but somehow less substantive—interview Ive and Altman gave in November in which they explained next to nothing, other than the fact that they’re aiming for a product so sensual that you’ll want to put various parts of your mouth all over it. Altman said it’s “so simple, but then it just does,” whatever that means. Ive said he’s into creating “sophisticated products that you want to touch and you feel no intimidation and you want to use almost carelessly and almost without thought.”

    So there you go. It just does, and you won’t even think about it, and you’ll want to smooch it, and it might be available before the midterms. What more do you need to know?

    Mike Pearl

  • 4 A.I. Themes That Defined 2025 and Are Shaping What Comes Next


    From infrastructure battles to physical-world intelligence, A.I.’s next chapter is already taking shape.

    In November, ChatGPT turned three, with a global user base rapidly approaching one billion. At this point, A.I. is no longer an esoteric acronym that needs explaining in news stories. It has become a daily utility, woven into how we work, learn, shop and even love. The field is also far more crowded than it was just a few years ago, with competitors emerging at every layer of the stack.

    Over the past year, conversation around A.I. has taken on a more complicated tone. Some argue that consumer chatbots are nearing a plateau. Others warn that startup valuations are inflating into a bubble. And, as always, there’s the persistent anxiety that A.I. may one day outgrow human control altogether.

    So what comes next? Much of the industry’s energy is now focused on the infrastructure side of A.I. Big Tech companies are racing to solve the hardware bottlenecks that limit today’s systems, while startups experiment with applications far beyond chatbots. At the same time, researchers are beginning to look past language models altogether, toward models that can reason about the physical world.

    Below are the key themes Observer has identified over the past year of covering this space. Many of these developments are still unfolding and are likely to shape the field well into 2026 and beyond.

    A.I. chips

    Even as OpenAI faces growing competition at the model level, its primary chip supplier, Nvidia, remains in a league of its own. Demand for its GPUs continues to outstrip supply, and no rival has yet meaningfully disrupted its dominance. Traditional semiconductor companies such as AMD and Intel are racing to claw back market share, while some of Nvidia’s largest customers are designing their own chips to reduce dependence on a single supplier.

    Google’s long-in-the-making Tensor Processing Unit, or TPU, has reportedly found its first major customer, Meta, marking a milestone after years of internal use. Meta, Microsoft and Amazon are also deep into developing in-house chips of their own—Meta’s Artemis, Microsoft’s Maia and Amazon’s Trainium.

    World models

    To borrow from philosopher Ludwig Wittgenstein, the limits of language are the limits of our world. Today’s A.I. systems have grown remarkably fluent in human language—especially English—but language captures only a narrow slice of intelligence. That limitation has prompted some researchers to argue that large language models alone can never reach human-level understanding.

    Meta’s longtime chief A.I. scientist, Yann LeCun, has been among the most vocal critics. “We’re never going to get to human-level A.I. by just training on text,” he said during a Harvard talk in September.

    That belief is fueling a push toward so-called “world models,” which aim to teach machines how the physical world works—how objects move, how space is structured, and how cause and effect unfold. LeCun is now leaving Meta to build such a system himself. Fei-Fei Li’s startup, World Labs, unveiled its first model in November after nearly two years of development. Google DeepMind has released early versions through its Genie projects, and Nvidia is betting heavily on physical A.I. with its Cosmos models.

    Language-specific A.I.

    While pioneering researchers look beyond language, linguistic barriers remain one of A.I.’s most practical challenges. More than half of the internet’s content is written in English, skewing training data and limiting performance in other languages.

    In response, developers around the world are building models rooted in local cultures and linguistic norms. In Japan, companies such as Sakana AI and NTT are developing LLMs tailored to Japanese language and values. In India, Krutrim is working to support the country’s vast linguistic diversity. France’s Mistral AI has positioned its Le Chat assistant as a European alternative to ChatGPT. Earlier this year, Microsoft also issued a call for proposals to expand training data across European languages.

    A.I. wearables

    It’s only natural that A.I. has a consumer hardware angle. This year brought a wave of experiments in wearable A.I.—some met with curiosity, others with discomfort.

    Friend, a startup selling an A.I. pendant, sparked backlash after a New York City subway campaign framed its product as a substitute for human companionship. In December, Meta acquired Limitless, the maker of a $99 wearable that records and summarizes conversations. Earlier in the year, Amazon bought Bee, which produces a $50 bracelet designed to transcribe daily activity and generate summaries.

    Meta is also developing a new line of smart glasses with EssilorLuxottica, the company behind Ray-Ban and Oakley. In July, Mark Zuckerberg went so far as to suggest that people without A.I.-enhanced glasses could eventually face a “significant cognitive disadvantage.” Meanwhile, OpenAI is quietly collaborating with former Apple design chief Jony Ive on a mysterious hardware project of its own. This all suggests the next phase of A.I. may be something we wear, not just something we type into.


    Sissi Cao

  • Sam Altman Hints at the Radical Design Choices Behind OpenAI’s Upcoming Devices


    Sam Altman claims that the AI device that OpenAI is currently building with famed Apple designer Jony Ive is actually a family of devices, and teased that they will likely not include a key component used by nearly every other smart device. 

    In a video interview on Dec. 18, Big Technology’s Alex Kantrowitz pushed the OpenAI co-founder and CEO to provide additional news about the device, which was officially announced in May. Responding to rumors that the device would be phone-sized and lack a screen, Altman clarified that OpenAI will be releasing a “small family of devices,” rather than a single device. 

    As for the rumored lack of a screen, Altman didn’t provide any firm information, but did opine on the current state of user interfaces for AI applications. He predicted that over time, computers will evolve from “dumb, reactive” machines into smarter, “proactive” entities that can understand user intent. But current devices, like laptops and smartphones, are not “well-suited to that kind of world.” 

    Altman said that he wants OpenAI’s devices to break some of the “unquestioned assumptions” around how smart devices work, since AI is such a unique technology, with screens being a prime example of such an assumption. Using a screen would limit OpenAI’s device to “the same way we’ve had graphical user interfaces working for many decades,” said Altman, and a keyboard would only slow down interactions.

    “I don’t think the current form factor of devices is the optimal fit, it’d be very odd if it were, for this incredible new affordance we have,” Altman said. 

    Ive has also expressed a distaste for screens in devices over recent years, and has even expressed regret for the “unintended consequences” of his role in popularizing smart devices with screens through the iPhone and iPad. 

    In a November interview, Ive said that he “can’t bear products that are like a dog wagging their tail in your face,” and that the new devices will be designed to spark joy in users. The devices are still a long way off, though: in that same interview, Ive said he plans to reveal them within two years.

    Ben Sherry

  • OpenAI’s Secretive A.I. Gadget Designed by Jony Ive Aims to Redefine Tech’s Vibe


    An A.I. device project spearheaded by Sam Altman and Jony Ive has earned the backing of Laurene Powell Jobs.

    Sam Altman and Jony Ive have stayed painstakingly cryptic about what their collaborative A.I. hardware device will ultimately look like. So far, the OpenAI CEO and former Apple designer have shared only that the product will be less clunky than a laptop and less screen-focused than a smartphone. Their latest hint, meanwhile, speaks to the product’s overall “vibe.”

    Current devices can feel like walking through Times Square, with all “the little indignities along the way: flashing lights in my face, tension going here, people bumping into me, noises going off,” Altman said at a recent event hosted by Laurene Powell Jobs’s Emerson Collective. OpenAI’s upcoming device, he added, will instead evoke the feeling of “sitting in the most beautiful cabin by a lake in the mountains and just sort of enjoying the peace and calm.”

    Altman and Ive officially joined forces in May when OpenAI acquired the designer’s hardware startup, io, which previously received backing from Powell Jobs, in a $6.5 billion deal. The acquisition brought Ive into the fold to oversee OpenAI’s efforts to design a consumer-facing A.I. device that reimagines how people interact with technology.

    “What I went to with Sam wasn’t a product but a tentative thesis. It was a thought about the nature of objects and our interface,” Ive said at the same event, declining to offer more details about the pitch he delivered.

    What little the pair have disclosed about their project remains frustratingly vague. The initial design goal was to create something users “want to lick or take a bite out of,” Altman said, adding that an early prototype was scrapped in part because it didn’t fit that description.

    They appear to have since crossed that threshold. According to Altman, their work has now produced its first prototypes, which he described as “jaw-droppingly good.” The final product is expected to arrive in under two years, giving users plenty of time to, as he joked, lick and bite the device to their heart’s content.

    Altman and Ive have emphasized that their device will not be another smartphone and have repeatedly warned about the harmful effects of today’s dominant tech products. Nonetheless, from the clues they’ve offered, their approach seems to echo Apple’s sleek design language. OpenAI’s device will be “playful” and full of “whimsy,” Altman said, describing it as so minimal that consumers will look at it and say, “That’s it?”

    Ive, too, stressed restraint and simplicity. “I can’t bear products that are like a dog wagging its tail in your face, or products that are so proud that they solve the complicated problem and want to remind you of how hard it is,” said the designer. “I love solutions that teeter on appearing almost naive in their simplicity.”

    Even as they try to avoid the pitfalls of modern consumer tech—devices that can fuel unhealthy relationships—the duo are also working toward a release with societal impact on par with landmark products like the iPhone. When asked which device he uses most often, Altman pointed to the iPhone, calling it “the most ‘before-and-after-moment’ product of my life.”


    Alexandra Tremayne-Pengelly

  • OpenAI’s first device with Jony Ive could be delayed due to ‘technical issues’


    OpenAI and Jony Ive could still have some serious loose ends to tie up before releasing their highly anticipated AI device. According to a Financial Times report, the partnership is still struggling with some “technical issues” that could ultimately end up pushing back the device’s release date, which is expected to be sometime in 2026.

    One of those lingering dilemmas involves figuring out the AI assistant’s voice and mannerisms, according to FT‘s sources. The AI device is meant to be “a friend who’s a computer who isn’t your weird AI girlfriend,” according to an FT source who was briefed on the plans. Beyond landing on a personality, OpenAI and Ive are still figuring out potential privacy concerns stemming from a device that’s always listening. On top of that, the budget could reportedly be a challenge due to the increased computing power necessary to run these mass-produced AI devices.

    Outside these latest struggles, we still know very little about the upcoming product. Sam Altman, OpenAI’s CEO, reportedly offered some clues to employees that it could be pocket-sized, aware of its environment and sans display. There are still plenty of questions about what OpenAI’s first hardware project will amount to, but the company could be exercising more caution since similar devices, like the Humane AI Pin, were discontinued after failing to deliver on sales.

    Jackson Chen

  • Sam Altman to Join Microsoft Following OpenAI Ouster


    Updated Nov. 20, 2023 6:34 am ET

    SAN FRANCISCO—Microsoft said it is hiring Sam Altman to helm a new advanced artificial-intelligence research team, after his bid to return to OpenAI fell apart Sunday with the board that fired him declining to agree to the proposed terms of his reinstatement.

    Microsoft Chief Executive Satya Nadella posted on X (formerly Twitter) late Sunday that Altman and Greg Brockman, OpenAI’s president and co-founder who resigned Friday in protest over Altman’s ouster, will lead its team alongside unspecified colleagues. Nadella said Microsoft was committed to its partnership with OpenAI and that it would move quickly to provide Altman and Brockman with “the resources needed for their success.” 

    Copyright ©2023 Dow Jones & Company, Inc. All Rights Reserved.
