ReportWire

Tag: google ai

  • The good, bad, and the ugly of Apple’s AI deal with Google | Fortune

    Apple and Google’s surprise AI partnership announcement on Monday sent shockwaves across the tech industry (and lifted Google’s market cap above $4 trillion). The two tech giants’ deal to infuse Google’s AI technology into Apple’s mobile software, including in an updated version of the Siri digital assistant, has major implications in the high-stakes battle to dominate AI and to own the platform that will define the next generation of computing.

    While there are still many unanswered questions about the partnership, including the financial component and the duration of the deal, some key takeaways are already clear. Here’s why the deal is good news for Google, so-so news for Apple, and bad news for OpenAI.

    The deal is further validation that Google has got its AI mojo back

    When OpenAI debuted ChatGPT in November 2022, and throughout a good part of the next two years, many industry observers had their doubts about Google’s prospects in the changing landscape. The search giant at times appeared to be floundering as it raced to field models that could be as capable as OpenAI’s ChatGPT and Anthropic’s Claude. Google endured several embarrassing product debuts, when its Bard chatbot and then its successor Gemini models got facts wrong, recommended glue as a pizza topping, and generated images of historically anachronistic Black Nazis.

    But today, Google’s latest Gemini models (Gemini 3) are among the most capable on the market and gaining traction among both consumers and businesses. The company has also been attracting lots of customers to its Google Cloud, in part because of the power of its bespoke AI chips, called tensor processing units (or TPUs), which may offer cost and speed advantages over Nvidia’s graphics processing units (GPUs) for running AI models.

    Apple’s statement on Monday that “after careful consideration” it had determined that Google’s AI technology “provides the most capable foundation for Apple Foundation Models” served as Gemini’s ultimate validation—particularly given that until now, OpenAI was Apple’s preferred technology provider for “Apple Intelligence” offerings. Analysts at Bank of America said the deal reinforced “Gemini’s position as a leading LLM for mobile devices” and should also help strengthen investor confidence in the durability of Google’s search distribution and long-term monetization.

    Hamza Mudassir, who runs an AI agent startup and teaches strategy and policy at Cambridge Judge Business School, said Apple’s decision is likely about more than just Gemini’s technical capabilities. Apple does not allow partners to train on Apple user data, and Mudassir theorized that Apple may have concluded Google’s control over its ecosystem—such as owning its own cloud—could provide data privacy and intellectual property guarantees that perhaps OpenAI or Anthropic couldn’t match.

    The deal also likely translates directly into revenue for Google. Although the financial details of the deal were not disclosed, a previous report from Bloomberg suggested Apple was paying Google about $1 billion a year for the right to use its tech.

    The bigger prize for Google may be the foot-in-the-door the deal provides to Apple’s massive distribution channel: the approximately 1.5 billion iPhone users worldwide. With Gemini powering the new version of Siri, Google may get a share of any revenue those users generate through product discovery and purchases made through a Gemini-powered Siri. Eventually, it could even lead to an arrangement that sees Gemini’s chatbot app pre-installed on iPhones.

    For Apple, the implications of the deal are a bit more ambivalent

    Apple’s Tim Cook
    David Paul Morris/Bloomberg via Getty Images

    The iPhone maker will obviously benefit from giving users a much more capable Siri, as well as other AI features, at an attractive cost and while guaranteeing user privacy. Dan Ives, an equity analyst who covers Apple for Wedbush, said in a note the deal provided Apple with “a stepping stone to accelerate its AI strategy into 2026 and beyond.”

    But Apple’s continuing need to rely on partners—first OpenAI and now Google—to deliver these AI features is a worrisome sign, suggesting that Apple, a champion of vertical integration, is still struggling to build its own LLM.

    It’s a problem that has dogged the company since the beginning of the generative AI era: For months last year several Apple Intelligence features were delayed, and the long-awaited debut of an updated Siri has been pushed back numerous times. These delays have taken a toll on Apple’s reputation as a tech leader and angered customers, some of whom filed a class action lawsuit against the company after the AI features promoted in ads for the iPhone 16 weren’t initially available on the device.

    When Apple CEO Tim Cook promised an updated version of Siri would be released in 2026, many assumed it would be powered by Apple’s own AI models. But apparently those models are not yet ready for prime time and the new Siri will be powered by Google instead.

    Daniel Newman, an analyst at the Futurum Group, said that 2026 is a “make-or-break year” for Apple. “We have long said the company has the user base and distribution that allows it to be more patient in chasing new trends like AI, but this is a critical year for Apple,” Newman said.

    Cook has shaken up the ranks, installing a new head of AI who previously worked at Google on Gemini. And, if the delays turn out to be related to Apple’s specific requirements around things like privacy, it may ultimately prove to have been worth the wait. Ideally, Apple would want an AI model that matches the capabilities of those from OpenAI, Anthropic, and Google but which is compact enough to run entirely on an iPhone, so that user data does not have to be transmitted to the cloud. It’s possible, said Mudassir, that Apple is grappling with technical limitations involving the amount of power these models consume and how much heat they generate. Partnering with Google buys Apple time to make breakthroughs in compression and architecture while also getting Wall Street “off its back,” he said.

    Apple defenders note that the company is rarely a first mover in new technology—it was not the first to create an MP3 player, a smartphone, wireless earphones, or a smart watch, yet it came from behind to dominate many of those product categories with a combination of design innovation and savvy marketing. And Apple has a history of learning from partners for key technology, such as chips, before ultimately bringing these efforts in-house.

    Or, in the case of internet search, Apple simply partnered with Google for the long-term, using the Google engine to handle search queries in its Safari browser. The fact that Apple never developed its own search engine has not hurt its growth. Could the same principle hold true for AI?

    But the Apple-Google tie-up is almost certainly bad news for OpenAI

    OpenAI CEO Sam Altman
    Florian Gaertner/Photothek via Getty Images

    While the Google partnership is not exclusive, meaning that Apple may continue to rely on OpenAI’s models for some of its Apple Intelligence features and OpenAI still has a chance to prove its models’ worth to Cupertino, Apple’s decision to go with Google is definitely a blow. At the very least, it solidifies the narrative that Google has not only caught up with OpenAI, but has now edged past it in having the best AI models in the market.

    Deprived of built-in distribution through Apple’s customer base, OpenAI may find it harder to grow its own user base. The company currently boasts more than 800 million weekly users, but recent reports suggest that the rate of usage may be slowing. OpenAI CEO Sam Altman has noted that many people currently see ChatGPT as synonymous with AI. But that perception could fray if Apple users find delight in using Gemini through Siri and come to see Gemini as the better model.
    Altman told reporters last month that he sees Apple as his company’s primary long-term rival. OpenAI is in the process of developing a new kind of AI device, with help from Apple’s former chief designer Jony Ive, that Altman hopes will rival the phone as the primary way consumers interface with AI assistants. That device may debut this year. As long as Apple was dependent on ChatGPT to power Siri, OpenAI had a good view into the capabilities its new device would be competing against. OpenAI is unlikely to have as much insight into Apple’s AI capabilities going forward, which may make it harder for the upstart to position its new device as an iPhone killer.

    OpenAI has to hope its new device is a hit that may enable it to cement users into a closed ecosystem, not dissimilar to the one Apple has built around its hardware device and iOS software. This “walled garden” approach is one way to keep users from switching to rival products when they offer broadly similar capabilities. OpenAI will also have to hope its AI researchers achieve breakthroughs that give it a more decisive and long-lasting edge over Google. That might convince Apple to rely more heavily on OpenAI again in the future. Or, it could obviate the need for OpenAI to have distribution on Apple’s devices at all.

    This story was originally featured on Fortune.com

    Jeremy Kahn, Beatrice Nolan

  • Google limits free Nano Banana Pro image generation usage due to ‘high demand’

    If you were hoping to create some silly images this long holiday weekend with Google’s new Nano Banana Pro model, I have some bad news: the company is restricting free usage of the AI system. In a support document spotted by 9to5Google, Google notes free users can currently generate two images daily, down from three per day previously. “Image generation and editing is in high demand,” the company writes. “Limits may change frequently and will reset daily.”

    It would appear Google is also limiting free Gemini 3 Pro usage, with the document stating non-paying users will get “basic access — daily limits may change frequently” as well. When the company first began rolling out Gemini 3 Pro on November 18, it guaranteed five free prompts per day. That was in line with Gemini 2.5 Pro. If you pay for either the Google AI Pro or AI Ultra plan, your usage limits have not changed. They remain at 100 and 500 prompts per day, respectively.

    Google isn’t the first company to enforce stricter usage following a popular release. You may recall OpenAI delayed rolling out ChatGPT’s built-in image generator to free users after the feature turned out to be more popular than anticipated. However, OpenAI eventually brought image generation to free users.

    Igor Bonifacic

  • Have No Fear, Google Has Plans to Enshittify AI Search With Ads, Too

    Google’s AI Overview has somehow managed to make people less engaged with search results, less likely to click through, and less likely to fact-check the information presented to them. So you know what that means: it’s time to monetize! According to a report from Search Engine Land, Google is planning to introduce advertisements inside its AI experiences.

    Per the report, Google Vice President of Search, Robbie Stein, said that he doesn’t see advertisements going away any time soon, and in fact expects that they will evolve to integrate into AI tools. Stein said the company has already “started some experiments on ads within AI Mode and within Google AI experiences,” and expects that “new and novel ad formats” will be introduced in the future so advertisers can continue to target users and give Google money for the right to do so.

    What are those “new and novel” formats, exactly? Stein floated one example of a person searching for information during a home remodel, in which a person could provide information to the AI-powered search, and it “could give even more fine-tuned recommendations or potential other services that you could consider, or deals that could be more useful to you.” So like…personalized and sponsored advertisements, but spit out by AI, apparently? It’s not entirely clear what is novel about that, other than the fact that the person searching will likely be less discerning about what sort of paid placements they are being exposed to. Of course, there’s always the possibility that your favored chatbot will collect even more detailed data about you, but that’s not really an innovation on the ad side of things.

    For now, the company insists that it’s focused on building “consumer products first and foremost,” but is obviously thinking about how to turn a profit on this thing that it has invested billions of dollars into developing. Stein also claimed that, for the time being, AI recommendations include “organic” results first and aren’t driven by ad inputs. Keep tabs on that to see how long that lasts.

    Gizmodo reached out to Google for comment but did not receive a response at the time of publication.

    Google is far from the only company looking to figure out how to work advertisements into AI. Earlier this year, Netflix floated the idea that it might use generative AI to create advertisements that would play between shows for users on an ad-supported tier. Thus far, though, most of these efforts seem decidedly run-of-the-mill in terms of innovation. Maybe there just aren’t that many new ways to put products in front of people’s faces. The reality is that what Google is selling here isn’t necessarily a better experience for consumers—it’s just trying to reassure advertisers that, for all of its talk that everything is about to change, some things will decidedly stay the same.

    AJ Dellinger

  • Google brings free Gemini access to India’s largest carrier

    Google’s AI ambitions are global in scale, so much so that it has just agreed to give Gemini away for free in India to people using the country’s biggest mobile provider. Thanks to a deal with Reliance Intelligence, an AI-focused subsidiary of Reliance Industries, people signed up to Jio’s Unlimited 5G plan will be offered Google AI Pro at no extra cost for 18 months.

    That means that qualifying users will have access to Gemini 2.5 Pro, Google’s most capable AI model. They will also benefit from higher limits for the Nano Banana and Veo 3.1 AI image and video generators, plus expanded access to NotebookLM. The plan also includes 2TB of cloud storage across Google’s apps, for a total combined worth of around 35,100 rupees ($396) per user.

    The offer will initially be exclusive to Jio customers between the ages of 18 and 25, but will eventually extend to all people on an eligible plan via the MyJio app. Jio is India’s largest mobile network operator, and a company in which Google acquired a 7.7 percent stake worth $4.5 billion in 2020.

    India is fast becoming a key battleground for AI expansion. Back in July, Perplexity AI partnered with Bharti Airtel, Jio’s rival carrier, to offer a year-long Perplexity Pro subscription worth $200 to all of Airtel’s 360 million customers. OpenAI is also adopting an aggressive strategy in the country, recently launching its cheapest ChatGPT subscription to date, priced at 390 rupees ($4.60), in India first. ChatGPT Go gives users 10 times the free version’s limits on messages, image generation and file uploads.

    Matt Tate

  • Google’s AI Weather Model Nailed Its First Major Storm Forecast

    While generative AI tools that primarily amount to slop generators grab most of the attention in the artificial intelligence space, there are occasionally some actually useful applications of the technology, like Google DeepMind’s use of AI weather models to predict cyclones. The experimental tool, launched earlier this year, successfully managed to provide accurate modeling of Hurricane Erin as it started gaining steam in the Atlantic Ocean earlier this month.

    As Ars Technica first reported, Hurricane Erin—which reached Category 5 status and caused some damage to the island of Bermuda, parts of the Caribbean, and the East Coast of the United States—provided Google DeepMind’s Weather Lab with the first real test of its capabilities.

    According to James Franklin, former chief of the hurricane specialist unit at the National Hurricane Center, it did quite well, outperforming the National Hurricane Center’s official model and topping several other physics-based models during the first 72 hours of modeling. It did ultimately fall off a bit the longer the prediction effort ran, but it still topped the consensus model through the five-day forecast.

    While Google’s model was impressively accurate in the first days of modeling, it’s the latter ones that are most important to experts, per Ars Technica, as days three through five of the model are the ones that officials count on to make decisions on calls for evacuation and other preparatory efforts. Still, it seems like there may be some promise in the possibility of AI-powered weather modeling—though the sample size here is pretty small.

    Most of the current gold standard modeling techniques used for storm prediction use physics-based prediction engines, which essentially try to recreate the conditions of the atmosphere by factoring in things like humidity, air pressure, and temperature changes to simulate how a storm might behave. Google’s model instead pulls from a massive amount of data that it was trained on, including a “reanalysis dataset that reconstructs past weather over the entire Earth from millions of observations, and a specialized database containing key information about the track, intensity, size and wind radii of nearly 5,000 observed cyclones from the past 45 years.”

    According to Google, it tested its model on storms from 2023 and 2024, and found that its five-day prediction managed to predict the path of a storm with more accuracy than most other models, coming about 140km or 90 miles closer to the ultimate location of the cyclone than the European Centre for Medium-Range Weather Forecasts’ ensemble model, which is considered the most accurate model available. Now it can point to a storm that it tracked in real-time as proof of concept, though there is no reason to think AI tools like this will completely displace the other approaches at this stage.

    AJ Dellinger

  • Google Drive now offers in-browser video editing

    Google is now offering a way to edit videos right in Drive via Google Vids in a compatible browser. Whenever you’re previewing a video in Google Drive, you may see an “Open” button in the top right of the screen. Clicking this opens the clip in Google Vids, where you can trim the video, add text and music and make other changes. Veo is available in the app too.

    After you open a file in Vids, a new file is created, and you’ll have to save or export that if need be. Google for Education has a free course detailing how to use Vids.

    In general, it seems like a useful way to edit videos that you’ve already uploaded to Drive, but there are some caveats here. For one thing, it’s restricted to paid users, including those on various Workspace business and enterprise plans, nonprofits and those with the Gemini Education or Gemini Education Premium add-ons. Google AI Pro and Ultra users will get access as well. So too will anyone who bought the Gemini Business and Enterprise add-ons before Google discontinued them earlier this year. Vids will be enabled for supported organizations by default unless they’ve opted to block access to Google Docs.

    This Google Vids feature works on the two latest versions of Chrome, Firefox and Microsoft Edge (Windows only). Compatibility on other browsers may vary. MP4, QuickTime, OGG and WebM videos are supported, with individual clips limited to 35 minutes of runtime and a file size of 4GB.

    Kris Holt

  • Google’s ex-CEO blames the company’s AI woes on remote working

    Google’s former CEO Eric Schmidt has a complaint about his old stomping ground—and it’s one that workers have heard on repeat for the past two years: They aren’t working in the office enough. 

    Schmidt, who left Google for good in 2020, blasted the company’s working-from-home policy during a recent talk at Stanford University, while claiming it’s the reason why the search engine giant is lagging behind in the AI race. 

    “Google decided that work-life balance and going home early and working from home was more important than winning,” Schmidt told Stanford students.

    “And the reason startups work is because the people work like hell.”

    https://www.youtube.com/watch?v=LxDM8io4lUA

    “I’m sorry to be so blunt,” Schmidt continued in the video posted on Stanford’s YouTube channel on Tuesday. “But the fact of the matter is, if you all leave the university and go found a company, you’re not gonna let people work from home and only come in one day a week if you want to compete against the other startups.”

    Schmidt made the remarks in response to a question from professor Erik Brynjolfsson about how Google lost its lead in AI to startups like OpenAI and Anthropic.

    “I asked [Google CEO] Sundar [Pichai] this, he didn’t really give me a very sharp answer. Maybe you have a sharper or a more objective explanation for what’s going on there,” Brynjolfsson posed to the former Google boss.

    Fortune has contacted Schmidt and Google for comment.

    WFH became the norm at Google after Schmidt left

    Schmidt, who led Google from 2001 to 2011, before handing the reins back to the search giant’s co-founder Larry Page, stayed on as Google’s executive chairman and technical advisor until 2020. 

    Since then, the world of work has undergone a significant transformation. Despite the dangers of the pandemic being long behind us, companies are largely still operating remotely—at least for part of the week. 

    In fact, a study from KPMG recently revealed that CEOs who believe office workers will be back at their desks five days a week in the near future are now in the small minority. 

    It’s worth highlighting that Schmidt’s one-day-a-week remark is an exaggeration: Like most firms, Google has asked workers to come into offices around three days a week, per the company’s 2022 Diversity Annual Report.

    More recently, Google has even begun formally tracking office badge swipes and using it as a metric in performance reviews.

    However, Schmidt should note that employee backlash from rigid return-to-office mandates could actually wipe out any productivity gains in Google’s AI department.

    WFH, RTO and productivity

    Schmidt’s not the first leader to complain that working from home kills innovation.

    However, CEOs who order their staff to work from an office five days a week, à la pre-pandemic, risk having fewer staff around to innovate.

    Reams of research suggest that workers would quit their jobs if forced to return to their company’s vertical towers.

    Meanwhile, leaders who have already enforced an RTO mandate have admitted they experienced more attrition than they anticipated and are struggling with recruitment. 

    Elon Musk, for one, has been an outspoken advocate for in-office work—he quickly found out that employees may reject their bosses’ ultimatum to commute to work or find another job.

    Twitter’s (now X) operations were put at risk soon after he took over when more workers than expected chose to quit rather than answer Musk’s call to go “hardcore”.

    Plus, even if employees don’t quit in anger, they’ll likely have less zing for their jobs: A staggering 99% of companies with RTO mandates have seen a drop in engagement. 

    Either way, Google’s lack of innovation in the AI department can’t be down to staff working from home more than those at OpenAI—they have the same 3-day in-office policy.

    Orianna Rosa Royle

  • Meta and Google announce new in-house AI chips, creating a “trillion-dollar question” for Nvidia

    Hardware is emerging as a key AI growth area. For Big Tech companies with the money and talent to do so, developing in-house chips helps reduce dependence on outside designers such as Nvidia and Intel while also allowing firms to tailor their hardware specifically to their own AI models, boosting performance and saving on energy costs.

    These in-house AI chips that Google and Meta just announced pose one of the first real challenges to Nvidia’s dominant position in the AI hardware market. Nvidia controls more than 90% of the AI chips market, and demand for its industry-leading semiconductors is only increasing. But if Nvidia’s biggest customers start making their own chips instead, its soaring share price, up 87% since the start of the year, could suffer.

    “From Meta’s point of view … it gives them a bargaining tool with Nvidia,” Edward Wilford, an analyst at tech consultancy Omdia, told Fortune. “It lets Nvidia know that they’re not exclusive, [and] that they have other options. It’s hardware optimized for the AI that they are developing.”

    Why does AI need new chips? 

    AI models require massive amounts of computing power because of the huge amount of data required to train the large language models behind them. Conventional computer chips simply aren’t capable of processing the trillions of data points AI models are built upon, which has spawned a market for AI-specific computer chips, often called “cutting-edge” chips because they’re the most powerful devices on the market. 

    Semiconductor giant Nvidia has dominated this nascent market: The wait list for Nvidia’s $30,000 flagship AI chip is months long, and demand has pushed the firm’s share price up almost 90% in the past six months. 

    And rival chipmaker Intel is fighting to stay competitive. It just released its Gaudi 3 AI chip to compete directly with Nvidia. AI developers—from Google and Microsoft down to small startups—are all competing for scarce AI chips, limited by manufacturing capacity. 

    Why are tech companies starting to make their own chips?

    Both Nvidia and Intel can produce only a limited number of chips because they and the rest of the industry rely on Taiwanese manufacturer TSMC to actually assemble their chip designs. With only one manufacturer solidly in the game, the manufacturing lead time for these cutting-edge chips is multiple months. That’s a key factor that led major players in the AI space, such as Google and Meta, to resort to designing their own chips. Alvin Nguyen, a senior analyst at consulting firm Forrester, told Fortune that chips designed by the likes of Google, Meta, and Amazon won’t be as powerful as Nvidia’s top-of-the-line offerings—but that could benefit the companies in terms of speed. They’ll be able to produce them on less specialized assembly lines with shorter wait times, he said.

    “If you have something that’s 10% less powerful but you can get it now, I’m buying that every day,” Nguyen said.

    Even if the native AI chips Meta and Google are developing are less powerful than Nvidia’s cutting-edge AI chips, they could be better tailored to the company’s specific AI platforms. Nguyen said that in-house chips designed for a company’s own AI platform could be more efficient and save on costs by eliminating unnecessary functions.

    “It’s like buying a car. Okay, you need an automatic transmission. But do you need the leather seats, or the heated massage seats?” Nguyen said.

    “The benefit for us is that we can build a chip that can handle our specific workloads more efficiently,” Melanie Roe, a Meta spokesperson, wrote in an email to Fortune.

    Nvidia’s top-of-the-line chips sell for about $25,000 apiece. They’re extremely powerful tools, and they’re designed to be good at a wide range of applications, from training AI chatbots to generating images to developing recommendation algorithms such as the ones on TikTok and Instagram. That means a slightly less powerful, but more tailored chip could be a better fit for a company such as Meta, for example—which has invested in AI primarily for its recommendation algorithms, not consumer-facing chatbots.

    “The Nvidia GPUs are excellent in AI data centers, but they are general purpose,” Brian Colello, equity research lead at investment research firm Morningstar, told Fortune. “There are likely certain workloads and certain models where a custom chip might be even better.”

    The trillion-dollar question

    Nguyen said that more specialized in-house chips could have added benefits by virtue of their ability to integrate into existing data centers. Nvidia chips consume a lot of power, and they give off a lot of heat and noise—so much so that tech companies may be forced to redesign or move their data centers to integrate soundproofing and liquid cooling. Less powerful native chips, which consume less energy and release less heat, could solve that problem.

    AI chips developed by Meta and Google are long-term bets. Nguyen estimated that these chips took roughly a year and a half to develop, and it’ll likely be months before they’re implemented at a large scale. For the foreseeable future, the entire AI world will continue to depend heavily on Nvidia (and, to a lesser extent, Intel) for its computing hardware needs. Indeed, Mark Zuckerberg recently announced that Meta was on track to own 350,000 Nvidia chips by the end of this year (the company is set to spend around $18 billion on chips by then). But movement away from outsourcing computing power and toward native chip design could loosen Nvidia’s chokehold on the market.

    “The trillion-dollar question for Nvidia’s valuation is the threat of these in-house chips,” Colello said. “If these in-house chips significantly reduce the reliance on Nvidia, there’s probably downside to Nvidia’s stock from here. This development is not surprising, but the execution of it over the next few years is the key valuation question in our mind.”

    Dylan Sloan

  • Apple cancels its car, Google’s AI goes awry and Bumble stumbles | TechCrunch

    Hello, folks, welcome to Week in Review (WiR), TechCrunch’s newsletter covering noteworthy happenings in the tech industry.

    This week, investment firm KKR announced that it would acquire VMware’s end-user computing business from Broadcom for $4 billion. As Ron explains, that business included VMware Workspace One and VMware Horizon — two remote desktop apps that had been part of VMware’s family of products.

    Elsewhere, Mistral, the French AI startup, launched a new model to rival OpenAI’s GPT-4 — and its own cheekily named chatbot dubbed Le Chat. The releases were timed with a Microsoft partnership to provide Mistral models to Microsoft’s Azure customers — and a minority investment ($16 million) from Microsoft in Mistral.

    Lots else happened. We recap it all in this edition of WiR — but first, a reminder to sign up to receive the WiR newsletter in your inbox every Saturday.

    News

    Apple car canceled: Apple has scuttled its secretive, long-running effort to build an autonomous electric car. The company is likely cutting hundreds of employees from the team, and all work on the project has stopped. It joins a list of other projects Apple has scrapped in various stages, including AirPower and a TV (not to be confused with Apple TV).

    Bumble stumbles: Bumble posted weak Q4 results showing a $32 million net loss and $273.6 million in revenue — below Wall Street expectations. To right the ship, CEO Lidiane Jones announced that 30% of Bumble’s workforce, or about 350 employees, would be let go and that Bumble would embark on an app overhaul targeted at reviving growth.

    Google’s AI goes awry: Google has apologized for an embarrassing AI blunder this week: An image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” oversensitive.

    Bad look: Matt Mullenweg, CEO of Tumblr owner Automattic, is supposed to be on sabbatical. Instead, he argued with Tumblr users this week over a content moderation decision that sparked accusations of transphobia, Amanda reports.

    Founder forced out: A group of Byju’s investors last Friday voted to remove the edtech group’s founder and chief executive, Byju Raveendran, and separately filed an oppression and mismanagement suit against the leadership at the firm to block the recently launched rights issue.

    Funding

    GenAI ebooks: Inkitt, a self-publishing platform using AI to develop bestsellers, has raised $37 million. The startup’s app lets people self-publish stories, and then, using AI and data science, selects what it believes are the most compelling of these to tweak and subsequently distribute and sell.

    Keeping it old school: Lapse has raised $30 million for its smartphone app that has you wait for photos to be “developed” — with no chance of editing and retaking — before sharing them with a select group of friends if you choose.

    Analysis

    Techstars reckoning: Mary Ann interviewed Maëlle Gavet, CEO of the startup accelerator program Techstars, in the wake of changes to its operations that have attracted biting criticism.

    Podcasts

    On Equity, the crew talked through startup news from Microsoft and Mistral AI, Thrasio and Glean — and also covered happenings over at COTU Ventures and Zacua Ventures.

    Meanwhile, Found spotlighted Ariel Kaye, the founder of Parachute, a direct-to-consumer bedding and home goods company.

    And for Chain Reaction, TC pulled from the archives to air an earlier conversation with Jack Lu, CEO and co-founder of Magic Eden, a “community-centric” NFT marketplace.

    Bonus round

    Steeply discounted Mirai: Toyota is offering $40,000 off a 2023 Toyota Mirai Limited, a fuel-cell vehicle that retails for $66,000 — plus $15,000 in free hydrogen over six years. As Tim writes, there’s only one catch: finding the hydrogen to power it.

    Kyle Wiggers