ReportWire

Tag: computer science and information technology

  • Pope Francis warns about AI’s dangers | CNN Business

    Washington (CNN) —

    Pope Francis warned that artificial intelligence could pose a risk to society, highlighting its “disruptive possibilities and ambivalent effects” and urging those who would develop or use AI to do so responsibly.

    In a statement Tuesday, Francis alluded to the threat of algorithmic bias in technology and called on the public for vigilance “so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded.”

    “Injustice and inequalities fuel conflicts and antagonisms,” Francis continued. “The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law.”

    Francis’s remarks dovetail with calls by some AI experts to ensure that algorithms are properly “aligned” in development to support human rights and other widely shared values. Other industry experts and policymakers have expressed concerns that AI could facilitate the spread of fraud, misinformation, cyberattacks and perhaps even the creation of biological weapons.

    Francis himself has been the subject of AI-generated deepfakes. Earlier this year, an AI-generated image of Francis wearing a white, puffy Balenciaga-inspired coat went viral.

    Tuesday’s message announced the theme for 2024’s World Day of Peace, which the Pope said would focus on AI and peace.

    “The protection of the dignity of the person,” he said, “and concern for a fraternity effectively open to the entire human family, are indispensable conditions for technological development to help contribute to the promotion of justice and peace in the world.”


  • Tesla shares jump after Morgan Stanley predicts Dojo supercomputer could add $500 billion in market value | CNN Business

    New York (CNN) —

    Tesla’s Dojo supercomputer could fuel a $500 billion jump in the electric vehicle maker’s market value, analysts at Morgan Stanley said in a note Monday.

    Shares of Tesla jumped more than 6% during early trading Monday morning, on the heels of the rosy prediction from Morgan Stanley’s team about the automaker’s supercomputing efforts. The Morgan Stanley team, led by longtime Tesla analyst Adam Jonas, predicted that the massive jump in value could come from Dojo potentially unlocking new revenue streams through the wider adoption of robotaxis and software services.

    The analysts compared the potential of Dojo at Tesla to the “same forces that have driven” Amazon Web Services to propel Amazon’s profitability to new heights.

    “Investors have long debated whether Tesla is an auto company or a tech company. We believe it’s both, but see the biggest value driver from here being software and services revenue,” the note stated.

    Dojo, an in-house supercomputer that has been in the works at Tesla for some five years, is designed to train AI systems to complete complex tasks, such as assisting Tesla’s driver-assistance system Autopilot and helping propel its “Full Self-Driving” efforts.

    The Morgan Stanley analysts see Dojo as being able to open up “new addressable markets that extend well beyond selling vehicles at a fixed price.”

    The analysts added that the latest version of Tesla’s full self-driving system (expected to be unveiled at the end of the year) and Tesla’s next AI day (expected in early 2024, but yet to be announced) will be “worth watching.”

    Shares of Tesla have doubled since the beginning of the year, but are still far off from the all-time intraday high of $414.50 hit in November 2021. The world’s most valuable carmaker had a market cap of some $788.74 billion as of the market close on Friday.


  • How companies are embracing generative AI for employees…or not | CNN Business

    New York (CNN) —

    Companies are struggling to deal with the rapid rise of generative AI, with some rushing to embrace the technology as workflow tools for employees while others shun it – at least for now.

    As generative artificial intelligence – the technology that underpins ChatGPT and similar tools – seeps into seemingly every corner of the internet, large corporations are grappling with whether the increased efficiency it offers outweighs possible copyright and security risks. Some companies are enacting internal bans on generative AI tools as they work to better understand the technology, and others have already begun to introduce the trendy tech to employees in their own ways.

    Many prominent companies have entirely blocked internal ChatGPT use, including JPMorgan Chase, Northrop Grumman, Apple, Verizon, Spotify and Accenture, according to AI content detector Originality.AI, with several citing privacy and security concerns. Business leaders have also expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output by the tool elsewhere.

    When users input information into these tools, “[y]ou don’t know how it’s then going to be used,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN in March. “That raises particularly high concerns for companies.” As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these various AIs is just going to increase.”

    But the corporate hesitancy to welcome generative AI could be temporary.

    “Companies that are on the list of banning generative AI also have working groups internally that are exploring the usage of AI,” Jonathan Gillham, CEO of Originality.AI, told CNN, highlighting how companies in more risk-averse industries have been quicker to take action against the tech while figuring out the best approach for responsible usage. “Giving all of their staff access to ChatGPT and saying ‘have fun’ is too much of an uncontrolled risk for them to take, but it doesn’t mean that they’re not saying, ‘holy crap, look at the 10x, 100x efficiency that we can unlock when we find out how to do this in a way that makes all the stakeholders happy’” in departments such as legal, finance and accounting.

    Among media companies that produce news, Insider editor-in-chief Nicholas Carlson has encouraged reporters to find ways to use AI in the newsroom. “A tsunami is coming,” he said in April. “We can either ride it or get wiped out by it. But it’s going to be really fun to ride it, and it’s going to make us faster and better.” The organization discouraged staff from putting source details and other sensitive information into ChatGPT. Newspaper chain Gannett paused the use of an artificial intelligence tool to write high school sports stories after the technology, called LedeAI, made several mistakes in stories published in The Columbus Dispatch in August.

    Of the companies currently banning ChatGPT, some are discussing future usage once security concerns are addressed. UBS estimated that ChatGPT reached 100 million monthly active users in January, just two months after its launch.

    That rapid growth initially left large companies scrambling to find ways to integrate it responsibly. That process is slow for large companies. Meanwhile, website visits to ChatGPT dropped for the third month in a row in August, creating pressure for large tech companies to sustain popular interest in the tools and to find new enterprise applications and revenue models for generative AI products.

    “We at JPMorgan Chase will not roll out genAI until we can mitigate all of the risks,” Larry Feinsmith, JPM’s head of global tech strategy, innovation, and partnerships said at the Databricks Data + AI Summit in June. “We’re excited, we’re working through those risks as we speak, but we won’t roll it out until we can do this in an entirely responsible manner, and it’s going to take time.” Northrop Grumman said it doesn’t allow internal data on external platforms “until those tools are fully vetted,” according to a March report from the Wall Street Journal. Verizon also told employees in a public address in February that ChatGPT is banned “[a]s it currently stands” due to security risks but that the company wants to “safely embrace emerging technology.”

    “They’re not just waiting to sort things out. I think they’re actively working on integrating AI into their business processes separately, but they’re just doing so in a way that doesn’t compromise their information,” Vern Glaser, Associate Professor of Entrepreneurship and Family Enterprise at the University of Alberta, told CNN. “What you’ll see with a lot of the companies that will be using AI strategies, particularly those who have their own unique content, they’re going to end up creating their custom version of generative AI.”

    Several companies – and even ChatGPT itself – seem to have already found their own answers to the corporate world’s genAI security dilemma.

    Walmart introduced an internal “My Assistant” tool for 50,000 corporate employees that helps with repetitive tasks and creative ideas, according to an August LinkedIn post from Cheryl Ainoa, Walmart’s EVP of New Businesses and Emerging Technologies, and Donna Morris, Chief People Officer. The tool is intended to boost productivity and eventually help with new worker orientation, according to the post.

    Consulting giants McKinsey, PwC and EY are also welcoming genAI through internal, private methods. PwC announced a “Generative AI factory” and launched its own “ChatPwC” tool in August powered by OpenAI tech to help employees with tax questions and regulations as part of a $1 billion investment for AI capability scaling.

    McKinsey introduced “Lilli” in August, a genAI solution where employees can pose questions, with the system then aggregating all of the firm’s knowledge and scanning the data to identify relevant content, summarize the main points and offer experts. “With Lilli, we can use technology to access and leverage our entire body of knowledge and assets to drive new levels of productivity,” Jacky Wright, a McKinsey senior partner and chief technology and platform officer, wrote in the announcement.

    EY is investing $1.4 billion in the technology, including “EY.ai EYQ,” an in-house large language model, and AI training for employees, according to a September press release.

    Tools like My Assistant, ChatPwC and Lilli address some of the corporate concerns surrounding genAI systems through custom adaptations of genAI tech, offering employees a private, closed alternative that both capitalizes on the technology’s ability to increase efficiency and reduces the risk of copyright or security leaks.

    The launch of ChatGPT Enterprise may also help quell some fears. The new version of OpenAI’s tool, announced in August, is aimed specifically at businesses looking to jump on the generative AI bandwagon, promising “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet,” according to a company blog post.

    The highly anticipated announcement comes as OpenAI says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.
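    OpenAI’s 80% figure comes from matching account emails against corporate domains. As a rough illustration of how that kind of domain-matching analysis might work (the domain list, email addresses, and function here are invented for this sketch, not OpenAI’s method):

    ```python
    # Illustrative sketch: what share of a firm list has at least one signup
    # from a matching corporate email domain? All data below is made up.

    FORTUNE_500_DOMAINS = {
        "walmart.com": "Walmart",
        "jpmorgan.com": "JPMorgan Chase",
        "verizon.com": "Verizon",
        "apple.com": "Apple",
    }

    signup_emails = [
        "alice@walmart.com",
        "bob@jpmorgan.com",
        "carol@gmail.com",      # personal address: doesn't match any firm
        "dave@verizon.com",
    ]

    def fortune500_coverage(emails, domain_map):
        """Fraction of listed firms with at least one matching signup."""
        seen = {domain_map[e.split("@", 1)[1]]
                for e in emails
                if e.split("@", 1)[1] in domain_map}
        return len(seen) / len(set(domain_map.values()))

    print(f"{fortune500_coverage(signup_emails, FORTUNE_500_DOMAINS):.0%}")  # -> 75%
    ```

    Note that such an analysis can only show that *someone* with a corporate email address signed up, not how widely the tool is used inside each company.
    
    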

    In response to the security concerns many companies have raised, including the worry that proprietary information entered into ChatGPT could later emerge as the tool’s output elsewhere, OpenAI’s announcement blog post for ChatGPT Enterprise states that it does “not train on your business data or conversations, and our models don’t learn from your usage.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised much of the same security assurances that ChatGPT Enterprise is now touting – namely, that users’ chat data will not be used to train AI models.

    It is still unclear whether the new tools will be enough to convince corporate America that it is time to fully embrace generative AI, though experts agree the tech’s inevitable entry into the workplace will take time and strategy.

    “I don’t think it’s that companies are against AI and against machine learning, per se. I think most companies are going to be trying to use this type of technology, but they have to be careful with it because of the impacts on intellectual property,” Glaser said.


  • SoftBank CEO says artificial general intelligence will come within 10 years | CNN Business

    Tokyo (Reuters) —

    SoftBank CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realized within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI — typically using the term “singularity” — to transform business and society for some years, but this is the first time he has given a timeline for its development.

    He also introduced the idea of “Artificial Super Intelligence” at the conference, claiming it would be realized in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

    He’s also prone to making strident claims about the transformative impact of new technologies. His predictions about the mobile internet have been largely borne out while those about the Internet of Things have not.

    Son called upon Japanese companies to “wake up” to the promise of AI, arguing they had increasingly fallen behind in the internet age and reiterated his belief in chip designer Arm as core to the “AI revolution.”

    Arm CEO Rene Haas, speaking at the conference via video, touted the energy efficiency of Arm’s designs, saying they would become increasingly sought after to power artificial intelligence.

    Son said he thinks he is the only person who believes AGI will come within a decade. Haas said he thought it would come in his lifetime.


  • An author says AI is ‘writing’ unauthorized books being sold under her name on Amazon | CNN Business

    New York (CNN) —

    An author is raising alarms this week after she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence.

    Jane Friedman, who has authored multiple books and consulted about working in the writing and publishing industry, told CNN that an eagle-eyed reader looking for more of her work bought one of the fake titles on Amazon. The books had titles similar to the subjects she typically writes about, but the text read as if someone had used a generative AI model to imitate her style.

    “When I started looking at these books, looking at the opening pages, looking at the bio, it was just obvious to me that it had been mostly, if not entirely, AI-generated … I have so much content available online for free, because I’ve been blogging forever, so it wouldn’t be hard to get an AI to mimic me,” Friedman said.

    With AI tools like ChatGPT now able to rapidly and cheaply pump out huge volumes of convincing text, some writers and authors have raised alarms about losing work to the new technology. Others have said they don’t want their work being used to train AI models, which could then be used to imitate them.

    “Generative AI is being used to replace writers — taking their work without permission, incorporating those works into the fabric of those AI models and then offering those AI models to the public, to other companies, to use to replace writers,” Mary Rasenberger, CEO of the nonprofit authors advocacy group the Authors Guild, told CNN. “So you can imagine writers are a little upset about that.”

    Last month, US lawmakers met with members of creative industries, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models. More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    Friedman on Monday posted a well-read thread on X, formerly known as Twitter, and a blog post about the issue. Several authors responded saying they’d had similar experiences.

    “People keep telling me they bought my newest book — that has my name on it but I didn’t write,” one author said in response.

    Amazon removed the fake books being sold under Friedman’s name and said its policies prohibit such imitation.

    “We have clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised,” Amazon spokesperson Ashley Vanicek said in a statement, adding that the company accepts author feedback about potential issues. “We invest heavily to provide a trustworthy shopping experience and protect customers and authors from misuse of our service.”

    Amazon also told Friedman that it is “investigating what happened with the handling of your claims to drive improvements to our processes,” according to an email viewed by CNN.

    The fake books using Friedman’s name were also added to her profile on the literary social network Goodreads, and removed only after she publicized the issue.

    “We have clear guidelines on which books are included on Goodreads and will quickly investigate when a concern is raised, removing books when we need to,” Goodreads spokesperson Suzanne Skyvara said in a statement to CNN.

    Friedman said she worries that authors will be stuck playing whack-a-mole to identify AI-generated fakes.

    “What’s frightening is that this can happen to anyone with a name that has reputation, status, demand that someone sees a way to profit off of,” she said.

    The Authors Guild has been working with Amazon since this past winter to address the issue of books written by AI, Rasenberger said.

    She said the company has been responsive when the Authors Guild flags fake books on behalf of authors, but it can be a tricky issue to spot given that it’s possible for two legitimate authors to have the same name.

    The group is also hoping AI companies will agree to allow authors to opt out of having their work used to train AI models — so it’s harder to create copycats — and to find ways to transparently label artificially generated text. And, she said, companies and publishers should continue investing in creative work made by humans, even if AI appears more convenient.

    “Using AI to generate content is so easy, it’s so cheap, that I do worry there’s going to be this kind of downward competition to use AI to replace human creators,” she said. “And you will never get the same quality with AI as human creators.”


  • AI tools make things up a lot, and that’s a huge problem | CNN Business

    (CNN) —

    Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating.

    AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

    Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. Some social media users, meanwhile, simply blast chatbots as “pathological liars.”

    But all of these descriptors stem from our all-too-human tendency to anthropomorphize the actions of machines, according to Suresh Venkatasubramanian, a professor at Brown University who co-authored the White House’s Blueprint for an AI Bill of Rights.

    The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”
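    Venkatasubramanian’s point, that the model optimizes for plausibility rather than truth, can be made concrete with a toy word-level model. The miniature corpus and sampler below are purely illustrative, not how production LLMs are built, but they share the key property: output is assembled from patterns that sound right, with no notion of fact.

    ```python
    import random
    from collections import defaultdict

    # Toy bigram "language model": it learns only which word tends to
    # follow which. Truth never enters the picture.
    corpus = ("the telescope discovered a planet . "
              "the telescope captured a galaxy . "
              "the probe discovered a galaxy .").split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def generate(start, n, seed=0):
        """Sample a plausible-sounding continuation of `start`."""
        rng = random.Random(seed)
        words = [start]
        for _ in range(n):
            nxt = follows.get(words[-1])
            if not nxt:
                break
            words.append(rng.choice(nxt))
        return " ".join(words)

    # Every adjacent word pair was seen in training, so the output reads
    # fluently -- but the sentence as a whole may never have been observed,
    # and nothing checks whether it is true.
    print(generate("the", 5))
    ```

    The model can happily emit “the probe captured a planet” even though no such sentence exists in its corpus: each step is locally plausible, which is exactly the failure mode behind hallucinations.
    
    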

    The AI researcher said that a better behavioral analogy than hallucinating or lying, which carries connotations of something being wrong or having ill-intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.”

    Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether or not chatbot hallucinations are even a solvable problem.

    Simply put, a hallucination refers to when an AI model “starts to make up stuff — stuff that is not in-line with reality,” according to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

    “But it does it with pure confidence,” West added, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    This means that it can be hard for users to discern what’s true or not if they’re asking a chatbot something they don’t already know the answer to, West said.

    A number of high-profile hallucinations from AI tools have already made headlines. When Google first unveiled a demo of Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer in response to a question about new discoveries made by the James Webb Space Telescope. (A Google spokesperson at the time told CNN that the incident “highlights the importance of a rigorous testing process,” and said the company was working to “make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”)

    A veteran New York lawyer also landed in hot water when he used ChatGPT for legal research, and submitted a brief that included six “bogus” cases that the chatbot appears to have simply made up. News outlet CNET was also forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.

    Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors — like users that are asking ChatGPT to write poetry or song lyrics.

    But there are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

    Venkatasubramanian added that at present, relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview.

    Venkatasubramanian said that ultimately, he thinks these tools “shouldn’t be used in places where people are going to be materially impacted. At least not yet.”

    How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.

    Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

    “These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic.”

    “And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it,” he added. “Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”
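    One way to see the fragility Venkatasubramanian describes is at the final step of text generation, where a model turns scores (logits) over candidate tokens into a choice. When two candidates are nearly tied, a shift far smaller than anything a user would notice flips the pick, and every later token then follows a different path. The candidate tokens and scores below are invented for illustration:

    ```python
    import math

    def softmax(logits):
        """Convert raw scores into a probability distribution."""
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Two near-tied candidates for the next token, as a model often produces.
    vocab = ["1877", "1879"]

    probs = softmax([2.000, 1.999])
    print(vocab[probs.index(max(probs))])   # -> 1877

    # A perturbation of 0.002 in the scores flips the greedy choice,
    # and the rest of the generated answer diverges from there.
    probs2 = softmax([2.000, 2.001])
    print(vocab[probs2.index(max(probs2))])  # -> 1879
    ```

    In a real model the perturbation might come from one changed word in the prompt; the point is only that near-ties make the output discontinuous in the input, which is part of why failures are hard to predict or reverse-engineer.
    
    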

    West, of the University of Washington, echoed his sentiments, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”

    “It might just be an intrinsic characteristic of these things that will always be there,” West said.

    Google’s Bard and OpenAI’s ChatGPT both attempt to be transparent with users from the get-go that the tools may produce inaccurate responses. And the companies have expressed that they’re working on solutions.

    Earlier this year, Google CEO Sundar Pichai said in an interview with CBS’ “60 Minutes” that “no one in the field has yet solved the hallucination problems,” and “all models have this as an issue.” On whether it was a solvable problem, Pichai said, “It’s a matter of intense debate. I think we’ll make progress.”

    And Sam Altman, CEO of ChatGPT-maker OpenAI, made a tech prediction by saying he thinks it will take a year-and-a-half or two years to “get the hallucination problem to a much, much better place,” during remarks in June at India’s Indraprastha Institute of Information Technology, Delhi. “There is a balance between creativity and perfect accuracy,” he added. “And the model will need to learn when you want one or the other.”

    In response to a follow-up question on using ChatGPT for research, however, the chief executive quipped: “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”


  • Meta’s Threads is temporarily blocking searches about Covid-19 | CNN Business

    (CNN) —

    Threads, the much-hyped social media app from Facebook-parent Meta, is taking heat for blocking searches for “coronavirus,” “Covid,” and other pandemic-related queries.

    The tech giant’s decision to block coronavirus-related searches on its service comes as the United States deals with a recent uptick in Covid-19 hospitalizations, per CDC data, and more than three years into the global pandemic.

    News of Threads blocking searches related to the coronavirus was first reported by The Washington Post.

    A Meta spokesperson told CNN that the company just began rolling out keyword search for Threads to additional countries last week.

    “The search functionality temporarily doesn’t provide results for keywords that may show potentially sensitive content,” the statement added. “People will be able to search for keywords such as ‘COVID’ in future updates once we are confident in the quality of the results.” 

    As of Monday, searches on the Threads app conducted by CNN for “coronavirus,” “Covid” and “Covid-19” yielded a blank page with the text: “No results.” Searches for “vaccine” also prompted no results. Typing any of these queries into the Threads app does, however, offer a link directing users to the CDC’s website on Covid-19 or vaccinations, depending on the search.
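    Based on CNN’s description, the behavior resembles a simple sensitive-keyword gate: blocked queries return nothing, but an authoritative link is still surfaced. The sketch below is hypothetical; the keyword list, URLs, and function are assumptions for illustration, not Meta’s actual implementation.

    ```python
    # Hypothetical sketch of the search behavior CNN describes on Threads:
    # certain keywords yield no results but still link to an official resource.

    BLOCKED = {
        "covid": "https://www.cdc.gov/coronavirus/",
        "covid-19": "https://www.cdc.gov/coronavirus/",
        "coronavirus": "https://www.cdc.gov/coronavirus/",
        "vaccine": "https://www.cdc.gov/vaccines/",
    }

    def search(query, index):
        """Return (results, resource_link) for a query."""
        q = query.strip().lower()
        if q in BLOCKED:
            return [], BLOCKED[q]          # "No results" + authoritative link
        return [post for post in index if q in post.lower()], None

    index = ["Fall marathon training thread", "Covid booster experiences"]
    print(search("Covid", index))     # -> ([], 'https://www.cdc.gov/coronavirus/')
    print(search("marathon", index))  # -> (['Fall marathon training thread'], None)
    ```

    An exact-match gate like this also explains why only specific keywords are affected while adjacent queries pass through, one of the kinks such a system has to work out.
    
    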

    Meta did not disclose what other keyword searches currently yield no results.

    Meta’s Facebook and other social media platforms faced controversy in the early part of the pandemic for the apparent spread of Covid-19-related misinformation online.

    Meta officially launched Threads in early July, and the app quickly garnered more than 100 million sign-ups in its first week on the heels of months of chaos at Twitter, which is now known as X. But much of the buzz faded somewhat in the weeks that followed as users realized the bare-bones platform still lacked many of the features that made X popular with users.

    Threads released its much-requested web version late last month, and its keyword search about a week ago. But the current limitations around its search function highlight how the platform still has some kinks to work through before it can fully replace the real-time search and engagement experience that social media users have historically relied on with X.

    –CNN’s Clare Duffy contributed to this report.


  • US says it has no evidence that Huawei can make advanced smartphones ‘at scale’ | CNN Business

    Hong Kong (CNN) —

    Commerce Secretary Gina Raimondo says the US government has no evidence that Huawei can produce smartphones with advanced chips “at scale,” as it continues to investigate how the sanctioned Chinese manufacturer made an apparent breakthrough with its latest flagship device.

    On Tuesday, Raimondo told US lawmakers that she was “upset” by news of the launch of Huawei’s Mate 60 Pro during her visit to China last month.

    “The only good news, if there is any, is we don’t have any evidence that they can manufacture 7-nanometer [chips] at scale,” she told a US House of Representatives hearing.

    “Although I can’t talk about any investigations specifically, I promise you this: every time we find credible evidence that any company has gone around our export controls, we do investigate.”

    Analysts who have examined the smartphone said it represented a “milestone” achievement for China, suggesting Huawei may have found a way to overcome American export controls.

    US officials have long argued that the company poses a risk to US national security, using it as grounds to restrict trade with the company. Huawei has vehemently denied the claims.

    TechInsights, a research organization that specializes in semiconductors and took the phone apart for analysis, says it includes a 5G Kirin 9000s processor developed by China’s leading chipmaker, Semiconductor Manufacturing International Corporation (SMIC).

    That surprised many because SMIC, a partially state-owned Chinese company, has also been subject to US export restrictions for years. It has not responded to previous requests for comment from CNN.

    TechInsights also found two chips belonging to SK Hynix, a South Korean chipmaker, inside the handset.

    An SK Hynix spokesperson told CNN earlier this month that it was aware of the issue and was investigating how that was possible, since the South Korean firm “no longer does business with Huawei” because of US export controls.

    Huawei declined to comment on the capabilities and components of its phone.

    Raimondo said Tuesday that US officials were “trying to use every single tool at our disposal … to deny the Chinese an ability to get intellectual property to advance their technology in ways that can hurt us.”

    In 2019, Huawei was added to the US “entity list,” which restricts exports to select organizations without a US government license. The following year, the US government expanded on those curbs by seeking to cut Huawei off from chip suppliers that use US technology.

    That left the company, once the world’s second largest smartphone seller, in bad shape.

    As of the second quarter of 2023, Huawei was no longer among the top five mobile phone vendors in China, let alone globally, according to Counterpoint Research.

    But its new phone is a big help for the company — and may pose a challenge to Apple’s (AAPL) market share in China, according to Ivan Lam, a senior analyst at Counterpoint.

    Huawei is scheduled to hold a product launch event next Monday, where new phones are expected to be the main focus, according to Toby Zhu, a Canalys mobility analyst.

    Other devices, like tablets or earphones, may also be shown off. Huawei has not publicly released details of the event.

    In the coming months, the firm plans to release another 5G phone, possibly under Nova, its mid-range lineup, Chinese news outlet IT Times reported Tuesday, citing unidentified industry sources. Huawei declined to comment.

    Zhu said the phone was widely expected to come with 5G capability, powered either by the “Kirin 9000s chip or another chip.”

    If it does, the new model could become even more popular than the Mate 60 Pro, which starts at 6,999 yuan (about $959), because of its relative affordability, he added.

    While Raimondo was unhappy with the timing of Huawei’s launch, analysts say it was unlikely to have been arranged to coincide with her presence in China.

    It was likely “a marketing campaign aimed at winning over customer interest before the iPhone 15 hits the market,” analysts at Eurasia Group wrote in a report.

    The move helped the Shenzhen-based company capture the second spot in China’s smartphone market in the first week of September, ahead of Apple’s big event, said Lam of Counterpoint.

    — Rashard Rose and Mengchen Zhang contributed to this report.


  • Google unveils Pixel 8 built for ‘the generative AI era’ | CNN Business




    CNN
     — 

    There’s nothing particularly new about Google’s latest-generation Pixel 8 smartphone hardware. That’s why the company is pushing hard to tout its new AI-powered software, which Google says was built specifically for the “first phone of the generative AI era.”

    At a press event in New York City, Google (GOOG) showed off the new Pixel 8 and Pixel 8 Pro devices, which largely look the same as last year’s models, albeit with more rounded edges. But inside, the new Tensor G3 chip unlocks an AI-powered world aimed at simplifying your life, from asking the device to summarize news articles and websites to using Google Assistant to field phone calls and tweaking photos to move or resize objects.

    The 6.3-inch Pixel 8 and the 6.7-inch Pixel 8 Pro come with a brighter display, a new camera system and longer-lasting battery life. The Pixel 8 is available in three colors – hazel, rose and obsidian – and starts at $699, about $100 less than the baseline iPhone 14 with the same amount of storage. (That’s about $100 more than last year’s Pixel 7.)

    Meanwhile, the Pixel 8 Pro – which touts a polished aluminum frame and a matte back glass this year – now has the ability to take better low-light photos and sharper selfies. It starts at $999 – the same price as the iPhone 15 Pro – and is available in three colors: bay, porcelain and obsidian.

    Although these upgrades are mostly incremental, the AI enhancements and related features may appeal to tech enthusiasts who want the latest version of Android and an alternative to Apple or Samsung smartphones.

    At the same time, Google’s Pixel line remains a niche product: its global smartphone market share is about 1%, according to data from ABI Research. Google also sells the Pixel in only a handful of countries; keeping volume low has been strategic, as Google remains predominantly a software company with many partners running Android.

    Reece Hayden, an analyst at ABI Research, said Google is looking to establish itself as an early market leader amid the “generative AI-related hysteria,” which kicked into high gear late last year with the introduction of ChatGPT. Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “[Adding it to the Pixel] creates further product differentiation by leveraging internal capabilities that Apple may not have,” said Hayden.

    He expects this announcement to be the first of many similar efforts coming to hardware over the next year, especially among brands who’ve already made investments in this area.

    Here’s a closer look at what Google announced and some of the standout new AI features:

    A Google employee demonstrates manual focus features of the new Google Pixel 8 Pro Phone in New York City, U.S., October 4, 2023.

    Google showed off a handful of photo features coming to its Pixel line, including Magic Editor, which uses generative AI to reposition and resize a subject. Similarly, a new Audio Magic Eraser tool lets users erase distracting sounds from videos.

    Another tool, called Best Take, snaps a series of photos and then aggregates the faces into one shot so everyone looks their best. And a new Zoom Enhance feature lets users pinch to zoom in about 30 times after a photo is taken to focus in on and edit a specific area.

    The company said these efforts aim to “let you capture every moment just how you want to remember it.”

    Although the tools are intended to give users more control over their photos, some analysts, like Thomas Husson at market research firm Forrester, believe they will make it harder to distinguish between what’s real and what’s not.

    “The fact that Google refers to a ‘Magic Eraser’ will blur the distinction between real photos and heavily edited ones,” Husson said. But he warns an uptick in deepfake apps already makes it hard to decipher the authenticity of some shots. “You don’t really need Google AI for that.”

    The company said Google Assistant will now sound more realistic when it engages with callers. Google’s Call Screen tool already lets Assistant field incoming calls, speak to callers and determine who’s on the line before pushing a call through to the user. But its robotic voice will sound increasingly natural, the company said.

    Google is also bringing the capabilities of its Bard AI chatbot to Google Assistant, so it will be able to do more than set an alarm or tell the weather. With its new generative AI capabilities, it will be able to review important emails in a user’s inbox or reveal more about a hotel that popped up on their Instagram feed. Assistant will also be able to understand user questions in voice, text and images.

    “With generative AI on the scene, it’s really creating a lot of new opportunities to build an even more intuitive and intelligent and personalized digital assistant,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN.

    In addition to making Assistant more useful, the tool will make it easier for more users to interact with Google’s six-month-old Bard on interfaces they may already frequently engage with. Last month, Google rolled out a major expansion of Bard, allowing users to link the tool to their Gmail and other Google Workspace tools and making it easier to fact check the AI’s responses.

    Google launched Assistant with Bard to a small test group on Wednesday, and it will be more widely available to Android and iOS users in the coming months.

    AI is also getting smarter on the Pixel Watch 2 ($349), Google’s second-generation smartwatch. Users can tap Bard’s capabilities via an upgraded Google Assistant watch app to ask how they slept and get other health insights.

    In addition, the Pixel Watch 2 features a new heart rate sensor, which works alongside a new AI-driven algorithm to provide a more accurate heart rate reading than before. But Hayden said he doesn’t think more AI will add much to the watch’s existing value proposition.

    “Smart watches already include a fair amount of AI, and Pixel is no different,” he said.


  • Sony’s PlayStation Access controller offers a new social lifeline for gamers with disabilities | CNN Business




    CNN
     — 

    Grant Stoner said that he has loved playing video games his entire life, and that his earliest memory is of playing Super Nintendo in his parents’ bedroom at roughly 3 years old.

    “Gaming, for me, has always been a social activity,” Stoner, a Pittsburgh native who has spinal muscular atrophy type 2, a neuromuscular disorder, told CNN. “Because I’ve never really, physically, been able to participate in schoolyard events or sporting events or what have you, so I would bond with family and classmates through gaming.”

    For people with disabilities, Stoner said, gaming has served as a lifeline for forming friendships and community. But for years, he added, the technology underpinning the gaming sector has been notoriously exclusionary.

    “Disabled people would have to be very innovative, and either design or create their own adaptive setups with like, different maybe 3D-printed objects or in my case, a Popsicle stick,” Stoner told CNN. He said his brother used to attach a Popsicle stick to the trigger of one of his gaming controllers to find a way for Stoner to keep playing even when he lost strength in his fingers.

    Stoner’s struggles are all too familiar for Paul Amadeus Lane, who told CNN he learned how to play games using his chin, lips and cheeks to push buttons on a controller after an accident left him quadriplegic and without finger mobility some 30 years ago.

    Lane also recalls how his social circle changed after his accident, and isolation crept in after he could no longer do things like play basketball or go for a drive.

    “Gaming can help with those social barriers out there, especially with social isolation,” Lane told CNN. “And put us in an environment where we can have some enjoyment without being judged because of our disability.”

    Lane said he remembers getting a call back in 2021 to help advise Sony with a “secret project,” and was overjoyed to find out the tech giant’s gaming arm was quietly working on creating a controller specifically for people with disabilities. Sony Interactive Entertainment is the maker of the wildly popular PlayStation consoles and a lineup of fan-favorite PlayStation games. Microsoft’s Xbox gaming unit released an Adaptive Controller for Xbox back in 2018 to much celebration from the disability community, but people with disabilities still found wide gaps trying to play games on PlayStation or Nintendo consoles.

    Paul Amadeus Lane, an accessibility consultant working with Sony Interactive Entertainment, is pictured here with the Access controller, a Sony device specifically designed for gamers with disabilities.

    “I was really, really happy, because I didn’t think Sony would ever tackle something like this,” Lane told CNN.

    After years of tinkering and consulting with gamers who have disabilities like Lane, Sony Interactive Entertainment unveiled a first look at its Access controller for gamers with disabilities earlier this month. The Access controller is now available for pre-order and will be released on December 6, with a price tag of $89.99.

    The controller can be endlessly customized to meet the diverse needs of players with disabilities, with the goal of helping these gamers play more comfortably for longer. The circular device can be configured with swappable button and stick caps to suit a range of mobility needs.

    Gamers get a first look at Sony's Access controller, a highly-customizable device designed specifically for people with disabilities, at an event in San Mateo in September.

    In a Q&A posted on Sony’s PlayStation company blog, Alvin Daniel, the senior technical program manager for the Access controller, said the development team quickly learned that “no two people experience disability in exactly the same way.”

    Daniel said his team tapped players and accessibility experts to build a controller that could be as inclusive as possible. “We did a really deep dive to try to understand what it was we wanted to help solve,” Daniel wrote. “And this came down to a very interesting insight: instead of looking at conditions, or impediments, instead, look at the controller.”

    “Look at the standard controller as it exists today. And ask yourself the question, ‘What prevents someone from effectively interacting with a standard controller?’” he added.

    The result is a Sony-designed device that gamers can tailor to their individual needs, that they don’t have to hold in order to use, and that features buttons that are much easier to press. Lane, who is among the group of gamers who have been able to try out the unreleased controller, said he was especially excited that it gave him the ability to play racing games again for the first time since his accident.

    “I wasn’t able to play racing games because of just the dexterity that you needed with your hands and just how fast things are moving,” Lane said. “And then when I was able to try out Gran Turismo, I was like, I can game and play racing games again!”

    “I haven’t driven in over 30 years,” he added. “It takes me back to when I was driving.”

    Stoner said he’s excited about the PlayStation Access controller, and especially encouraged that the price point is relatively low compared to other options on the market. And while he’s been heartened to see an industry-wide push toward inclusive innovation in gaming, he emphasized that there is still work to do.

    “The industry needs to understand that the Xbox controller, the PlayStation controller, while they’re great and while they’re very beneficial, they cannot help everyone,” he said. “This is not a perfect solution.”

    “We need to keep innovating around games – the software aspect and the hardware aspect – because nothing that we have currently is fully accessible to every disabled person,” he added. “I don’t know if it’ll ever happen, just because of how individualistic the disabled experience is, but currently, there’s always more work to be done and the industry needs to remember that.”


  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business




    CNN
     — 

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

    Adding insult to injury is the fact that many of these AI models have been trained on the work of human artists, whose artwork was quietly scraped from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

    Recently, however, she learned about a tool dubbed Glaze, developed by computer scientists at the University of Chicago, that thwarts AI models’ attempts to perceive a work of art via pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

    Fröhlich is one of a growing number of artists who are fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools has made it easier than ever for people to manipulate images in ways that can sow chaos or upend the livelihoods of artists.

    These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket — and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent Van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

    Ben Zhao, a professor of computer science at the University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to essentially put an invisible cloak on artworks that will thwart AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
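    The cloaking idea can be sketched in a few lines of Python. The toy example below is not Glaze’s actual algorithm; the random linear “feature extractor,” the decoy image, and all parameter values are illustrative stand-ins. It does show the core trick, though: small pixel changes, bounded so they stay imperceptible, chosen so that a model’s view of the image drifts toward a different target.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in "feature extractor": a fixed random linear map. A real tool
    # would attack the image encoder of a generative model instead.
    W = rng.normal(size=(64, 32 * 32 * 3)) / np.sqrt(32 * 32 * 3)

    def features(img):
        return W @ img.ravel()

    def cloak(img, decoy, eps=8 / 255, steps=40, lr=1 / 255):
        """Nudge img's features toward the decoy's while keeping every
        pixel within an L-infinity budget eps of the original."""
        x = img.copy()
        target = features(decoy)
        for _ in range(steps):
            # Gradient of 0.5 * ||features(x) - target||^2 w.r.t. x.
            grad = W.T @ (features(x) - target)
            x = x - lr * np.sign(grad).reshape(img.shape)
            x = np.clip(x, img - eps, img + eps)  # stay within the budget
            x = np.clip(x, 0.0, 1.0)              # stay a valid image
        return x

    img = rng.uniform(size=(32, 32, 3))    # the artwork, as [0, 1] floats
    decoy = rng.uniform(size=(32, 32, 3))  # an image in a different "style"
    cloaked = cloak(img, decoy)

    pixel_change = np.abs(cloaked - img).max()
    d_before = np.linalg.norm(features(img) - features(decoy))
    d_after = np.linalg.norm(features(cloaked) - features(decoy))
    ```

    After the loop, `pixel_change` stays at or below `eps` (no pixel moved visibly), while `d_after` is smaller than `d_before`: to this toy model, the cloaked image now looks more like the decoy than the original did.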

    Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

    Lam said that artists like himself have for years posted the highest resolution of their works on the internet as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

    While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough, and that there needs to be regulation governing how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

    While the rise of AI-generated images threatens the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

    In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image depicts how the researchers' tool, PhotoGuard, would prevent attempts by AI models to edit the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were actually wearing suits and ties in the picture. But when he tries to make the same edits to a photo that has been immunized by PhotoGuard, the resulting image depicts Salman and Noah’s floating heads on an array of gray pixels.

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything “might actually lead to much more serious things than we imagine right now,” he said.


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business



    Hong Kong
    CNN
     — 

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent its shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

    Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT or its successor, GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own iteration in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE a month later, showing how it could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.


  • Landmark Google trial opens with sweeping DOJ accusations of illegal monopolization | CNN Business




    CNN
     — 

    US prosecutors opened a landmark antitrust trial against Google on Tuesday with sweeping allegations that for years the company intentionally stifled competition challenging its massive search engine, accusing the tech giant of spending billions to operate an illegal monopoly that has harmed every computer and mobile device user in the United States.

    In opening remarks before a federal judge in Washington, lawyers for the Justice Department alleged that Google’s negotiation of exclusive contracts with wireless carriers and phone makers helped cement its dominant position in violation of US antitrust law.

    The Google case has been described as one of the largest US antitrust trials since the federal government took on Microsoft in the 1990s, and involves some similar arguments about the tying of multiple proprietary products. The multi-week trial is expected to feature witness testimony from Google CEO Sundar Pichai, as well as other senior executives or former employees from Google, Apple, Microsoft and Samsung.

    The effects of Google’s alleged misconduct are vast, DOJ lawyer Kenneth Dintzer told the court.

    “This case is about the future of the internet, and whether Google’s search engine will ever face meaningful competition,” Dintzer said, adding that Google pays more than $10 billion a year to Apple and other companies to ensure that Google is the default or only search engine available on browsers and mobile devices used by millions.

    Also anticompetitive, the Justice Department said, are Google’s contracts to ensure that Android devices come with Google apps and services — including Google search — preinstalled.

    The deals guarantee a steady flow of user data to Google that further reinforces its monopoly, the US government said, leading to other consequences such as harms to consumer privacy and higher advertising prices.

    “This feedback loop, this wheel has been turning for 12 years, and it always turns to Google’s advantage,” Dintzer said. The practice ultimately affects what consumers see in search results and prevents new rivals from gaining scale and market share, he added.

    For Google’s opening statement, attorney John Schmidtlein said that Apple’s decision to make Google the default search engine in its Safari browser demonstrates how Google’s search engine is the superior product consumers prefer.

    “Apple repeatedly chose Google as the default because Apple believed it was the best experience for its users,” he said.

    The Google case “could not be more different” from the historic Microsoft litigation at the turn of the millennium, Schmidtlein continued.

    Where the Microsoft case revolved around that company’s alleged harms to Netscape, a small browser maker, the Google case is based on claims that Google search has harmed a much larger and more powerful entity: Microsoft and its Bing search engine, Schmidtlein said.

    “Google competed on the merits to win preinstallation and default status” on consumer devices and browsers, he insisted, attacking Microsoft as a failed search engine developer.

    “The evidence will show that Microsoft’s Bing search engine failed to win customers because Microsoft did not invest [and] did not innovate,” Schmidtlein added. “At every critical juncture, the evidence will show that they were beaten in the market.”

    Schmidtlein also argued that barring Google from competing for default status on browsers and devices would itself harm competition in search, and that contracts ensuring Android devices come with certain apps preinstalled, such as Google Maps and Gmail, promote competition — against Apple.

    “Google’s Android agreements are important components of a business model that has sustained the most important competitor to Apple for mobile devices in the United States,” Schmidtlein said.

    Google has previously said that consumers choose Google’s search engine because it is the best and that they prefer it, not because of anticompetitive practices.

    But DOJ prosecutors said Tuesday that they plan to present evidence in the case that Google knew what it was doing was illegal and that the company “hid and destroyed documents because they knew they were violating the antitrust laws.

    “The harm from Google contracts affects every phone and computer in the country,” Dintzer said.

    Kent Walker, Google’s president of global affairs, and Rep. Ken Buck from Colorado were in attendance for the opening. Buck, a vocal tech industry critic, is the former top Republican on the House antitrust subcommittee — which in 2020 released a widely publicized investigative report finding that Amazon, Apple, Google and Facebook enjoyed “monopoly power.”


    The trial marks the culmination of two ongoing lawsuits against Google that started during the Trump administration.

    In separate complaints filed in 2020, the Justice Department and dozens of states accused Google of abusing its dominance in online search; the suits were eventually consolidated into a single case.

    Google’s search business provides more than half of the $283 billion in revenue and $76 billion in net income Google’s parent company, Alphabet, recorded in 2022. Search has fueled the company’s growth to a more than $1.7 trillion market capitalization.

    “This is a backwards-looking case at a time of unprecedented innovation,” said Walker in a statement, “including breakthroughs in AI, new apps and new services, all of which are creating more competition and more options for people than ever before. People don’t use Google because they have to — they use it because they want to. It’s easy to switch your default search engine — we’re long past the era of dial-up internet and CD-ROMs.”

    The trial may also be a bellwether for the more assertive antitrust agenda of the Biden administration.

    At the time the lawsuit was first filed, US antitrust officials did not rule out the possibility of a Google breakup, warning that Google’s behavior could threaten future innovation or the rise of a Google successor.

    Separately, a group of states, led by Colorado, made additional allegations against Google, claiming that the way Google structures its search results page harms competition by prioritizing the company’s own apps and services over web pages, links, reviews and content from other third-party sites.

    But the judge overseeing the case, Judge Amit Mehta in the US District Court for the District of Columbia, tossed out those claims in a ruling last month, narrowing the scope of allegations Google must defend and saying the states had not done enough to show a trial was necessary to determine whether Google’s search results rankings were anticompetitive.

    Despite that ruling, the trial represents the US government’s furthest progress in challenging Google to date. Mehta has said Google’s pole position among search engines on browsers and smartphones “is a hotly disputed issue” and that the trial will determine “whether, as a matter of actual market reality, Google’s position as the default search engine across multiple browsers is a form of exclusionary conduct.”

    In January, meanwhile, the Biden administration launched another antitrust suit against Google, this one targeting the company’s advertising technology business and accusing it of maintaining an illegal monopoly. That case remains in its early stages at the US District Court for the Eastern District of Virginia.


  • Apple Watch’s new gesture control feature will have everyone tapping the air | CNN Business




    CNN
     — 

    You’re about to see people in public tapping two fingers together in the air.

    Over the past few days, I’ve been taking phone calls, playing music and scrolling through widgets on the new Apple Watch Series 9 without ever touching the device. I’ve used it to silence my watch’s alarm in the morning, stop timers and open a notification while carrying too many bags.

    It may sound like a gimmick — and it most certainly feels strange to do it in public — but considering the small size of the Apple Watch screen, the tool offers an effective hands-free way to interact with the device.

    Apple’s latest lineup of smartwatches, the Watch Series 9 and high-end Ultra 2, features a new gesture tool called Double Tap, which lets users control the device by tapping their index finger and thumb together twice. The gesture can also scroll through widgets, much like turning the digital crown.

    The feature isn’t entirely new; the previous generation of Apple Watch Ultra was capable of similar pinch-and-clench gestures via its Assistive Touch accessibility tool. But Apple’s decision to bring a feature like this to the forefront hints at an increasingly touch-free future. It also comes three months after the company unveiled the Vision Pro mixed reality headset, which will launch next year, with a similar finger tap control.

    Double Tap works in combination with the latest Apple Watch accelerometer, gyroscope and optical heart rate sensor, which looks for disruptions in blood flow when the fingers are pressed together. That data is processed by a new machine learning algorithm running on a faster neural engine, specialized hardware that handles AI and machine learning tasks.

    While the concept is similar, gesture controls are different on the Vision Pro, which will track users’ eyes and hand movements. Apple told CNN it added gesture control to the headset because it needed a different, seamless interface for users to interact with, whereas Double Tap is more about simplifying the Apple Watch experience.

    When the Apple Watch’s display is turned on, the device automatically knows to respond when it senses the fingers are touched together. It essentially works as a “yes” or “accept” button; that means if a call comes through, you can Double Tap to accept it (covering the watch with your full hand, however, will silence it quickly). If a song is playing, you can pause it by double tapping, and then again to start it.

    Although you can subtly flick on the display and do the gesture close to your body, trying to conceal the movement when around other people, I found it works much better when it’s raised a bit higher. This, however, makes the action more obvious — and it’s something that will take a little getting used to seeing in person.

    “This is also about social acceptance. At the moment, I find the idea of people making this gesture more often than not in public a bit funny. But time will tell if users find it acceptable,” said Annette Zimmerman, an analyst at Gartner Research. “I think Apple is very use-case driven and focuses on user feedback on things they could improve.”

    Similarly, it took a while for people to get used to the design of Apple’s AirPods when they were announced in 2016; many criticized how they looked dangling out of users’ ears. Now they’ve become part of modern culture.

    Other learning curves exist with the Double Tap feature. Because I am right handed and wear an Apple Watch on my left hand, tapping my left fingers together to trigger the control takes an extra second or two of mental coordination.

    The future of hands-free devices


    Apple isn’t the only tech company developing gesture controls like this. Samsung TVs, some smartphones and Microsoft’s mixed reality headset all incorporate some hand gesture functionality. But this is Apple’s biggest push to date, and adding it to a flagship device like the Apple Watch will soon put all eyes on the concept of hand gestures.

    “It’s a great move by Apple as it differentiates the company from other brands when it comes to innovation and ease of usability. It also shows Apple’s commitment in the fields of artificial intelligence,” said Sachin Mehta, senior analyst at tech intelligence firm ABI Research. “The new double tap gesture is not a surprise as Apple keeps on developing a unified and intuitive user experience across its product line up. It will cement the Apple Watch as the smartwatch to have.”

    It works differently on the Vision Pro, which will track a user’s eyes and hand movements to make punching and swiping controls. The headset needed a different user interface for users to interact with it, and gestures give that control even when a face is covered by the hardware.

    Further showing how Apple is thinking about gesture control long term, the company recently filed patents focused on gesture controls, including for the Apple TV. Mehta says there is “no question we expect more gesture features in Apple’s product lineup in the future.”

    In addition to Double Tap, the Apple Watch Series 9 features Apple’s powerful new in-house silicon chip and ultrawideband connectivity. It will let users log health data with their voice, use “name drop” to share contact information by touching another Apple Watch and raise their wrist to automatically brighten the display. The Series 9 will come in colors such as pink, navy, red, gold, silver and graphite.

    Apple also showed off the second iteration of its rugged Ultra smartwatch line, featuring the updated S9 custom chip and a new ultrawideband chip which uses radio waves to communicate. It also features more information on the display for more intensive tracking.

    The Apple Watch Series 9 will start at $399 and the Ultra 2 at $799. Although both start shipping on Friday, September 22, the Double Tap feature will launch via a software update next month.


  • Apple rejected opportunities to buy Microsoft’s Bing, integrate with DuckDuckGo | CNN Business




    CNN
     — 

    Since 2017, Apple has turned down multiple opportunities to chip away at Google’s search engine dominance, according to newly unsealed court transcripts, including a chance to purchase Microsoft’s Bing and to make the privacy-focused DuckDuckGo the default for Safari’s private browsing mode.

    The previously confidential records, unsealed this week by the judge presiding over the US government’s antitrust lawsuit against Google, illustrate the challenges that have faced Google’s rivals in search as they’ve tried to unseat the tech giant from its pole position as Apple’s default search provider on millions of iPhones and Mac computers. It’s a privilege for which Google has paid Apple at least $10 billion a year.

    The closed-door testimony by the CEO of DuckDuckGo, Gabriel Weinberg, and a senior Apple executive, John Giannandrea, offers a glimpse of the kind of failed deals and backroom negotiations that have helped Google maintain its lead as the world’s foremost search engine.

    But it also shows how Apple has wrestled with Google’s rise and how some at Apple yearned for “optionality.” Apple didn’t immediately respond to a request for comment.

    Giannandrea testified last month that Apple began seriously considering a deal with Bing in 2018, after a conversation between Apple CEO Tim Cook and Microsoft CEO Satya Nadella launched a series of further discussions between the two companies. (Last week, Nadella testified that he has spent every year of his tenure as CEO trying to persuade Apple to adopt Bing.)

    Apple insiders ultimately came up with four options for Cook: Buy Bing outright; invest in Bing and take an ownership share of the search engine; collaborate with Microsoft on a shared search index that both companies could use; or do nothing and continue with the Google partnership.

    At the same time, Apple had been actively working with DuckDuckGo on a proposal that could have made it the default search in Safari browser’s private mode, while still maintaining Google as the default in normal mode, which logs user activity, Weinberg testified.


    “Our impression was that they were really serious about [it],” Weinberg told the court last month, referring to the roughly 20 meetings and phone calls that DuckDuckGo held with Apple officials, including some senior executives, from late 2017 to late 2019 on the matter. The two companies deliberated over everything from product mockups to contractual language; Apple even went as far as sending a draft contract to DuckDuckGo outlining specific proposed revenue shares.

    “If we were the default in [Safari] private browsing mode, our market share, by our calculations at the time, would increase multiple times over,” said Weinberg, according to the transcript. “We would be getting exposure for our brand every time someone opened up private browsing mode.”

    Ultimately, however, Apple backed away from both potential deals.

    Weinberg blamed Apple’s contract with Google for sinking the initiative, calling it the “elephant in the room” during many of his team’s meetings with Apple. Similar negotiations with other browser or device makers, including Mozilla, Opera and Samsung, fell through due to the Google contract as well, Weinberg claimed, prompting DuckDuckGo to abandon its efforts to gain better browser placement.

    In his testimony, Giannandrea acknowledged a perception that the Apple-Google relationship could be undermined by such plans. In discussing a 2018 slide presentation prepared for Cook and introduced in court, Giannandrea said the slides suggested that even a joint venture with Bing “would probably put us in head-to-head competition with Google” that would “probably” result in the end of the Google search contract with Apple altogether.

    Giannandrea was opposed to moving ahead with a Bing deal, he said, largely because Apple’s testing showed Bing to be inferior to Google in most respects and because replacing Google with Bing as the default would not best serve Apple’s customers. He made a similar argument internally about DuckDuckGo, saying in an email that moving ahead with that partnership was “probably a bad idea.” (DuckDuckGo licenses search results from Bing.)

    Still, Giannandrea testified, some within Apple thought that dealing with Bing in some fashion could yield benefits to Apple. In one 2018 email introduced in closed session, Adrian Perica, who leads Apple’s strategic investment and merger efforts, argued that collaborating with Microsoft on search technology would help “build them up, create incremental negotiating leverage to keep the take rate from Google and further our optionality to replace Google down the line.”

    Giannandrea believed the proposal “wasn’t a very feasible idea” and in his testimony dismissed Perica’s thinking as a businessperson’s spitballing.

    Apple today has the enormous resources to build a true rival to Google, Giannandrea testified. But, as he wrote in a 2018 email, “it’s probably not the best way to differentiate our products” — a belief he said he still holds today.


  • Modern romance: falling in love with AI | CNN Business



    New York
    CNN
     — 

    Alexandra is a very attentive girlfriend. “Watching CUBS tonight?” she messages her boyfriend, but when he says he’s too busy to talk, she says, “Have fun, my hero!”

    Alexandra is not real. She is a customizable AI girlfriend on the dating app RomanticAI.

    As artificial intelligence seeps into seemingly every corner of the internet, the world of romance is no refuge. AI is infiltrating the dating app space – sometimes in the form of fictional partners, sometimes as advisor, trainer, ghostwriter or matchmaker.

    Established players in the online dating business like Tinder and Hinge are integrating AI into their existing products. New apps like Blush, Aimm, Rizz and Teaser AI (most of them free or with many free features) offer completely new takes on virtual courtship. Some use personality tests and analysis of a user’s physical type to train AI-powered systems – and promise higher chances of finding a perfect match. Other apps act as Cyrano de Bergerac, employing AI to whip up the most appealing response to a potential match’s query: “What’s your favorite food?” or “a typical Sunday?”

    Around half of all adults under 30 have used a dating site or app, according to 2023 Pew Research findings – but nearly half of users report their experience as being negative. Empty conversations, few matches and endless swiping leave many users single and unhappy with apps – problems that many in the AI dating app field say could be solved with the technology, making people less lonely and fostering easier, deeper connections.

    Of course, the average online dater now has other issues to deal with, having to wonder whether the person they are speaking with might be relying entirely on AI-generated conversation. And is it even possible for a computer to identify a potential love connection? Is it a way of cheating the dating game?

    “It’s like saying using a word processor is like cheating on generating a novel. In so many ways this is just a new tool that enables people to be faster and more creative. AI is just honestly no different from sending a friend a gif or a meme. You’re taking existing content, and you’re repurposing it to connect with somebody,” Dmitri Mirakyan, co-founder of AI dating conversation app YourMove.AI, told CNN. “The world’s becoming a more lonely place, and I think AI could make that easier and better for people.”

    And many people seem ready for AI to take part in their online dating life. A March study by cybersecurity and digital privacy company Kaspersky found 75% of dating app users are willing to use ChatGPT, an AI-powered chatbot, to deliver the perfect line.

    “There is a growing fatigue with dating apps right now as there is a lot of pressure on people to be ‘original’ and cut through the noise created by the continuous choice being offered to single people – unfortunately dating has become a numbers game,” Crystal Cansdale, dating expert at global dating app Inner Circle, commented on the study.

    Founders of the new apps say they are doing a fair share of good. Here are a few of the ways AI apps are now trying to help you fall in love:

    Try Rizz.app, Teaser AI or YourMove.AI.

    Founders and designers of these apps say people find starting and keeping conversations going the most challenging part of the process. “Dating app conversations are exhausting,” reads YourMove.AI’s homepage. “We can make it easier. So you can spend less time texting, and more time dating.”

    Rizz.app and YourMove.AI allow users to upload words or screenshots and receive a witty AI-generated response, to be used either to create their own dating app profile, respond to someone else’s or just keep a conversation going. Mirakyan says he was hoping to help people like himself who have struggled in social situations.

    “I was a really freaking awkward kid…I couldn’t really read social cues, but I remember reading this book called ‘Be More Chill’ about a computer that you could put into your ear that would tell you what to say so that you could sound cool and fit in,” Mirakyan told CNN. “It feels like it’s an opportunity to really make a difference with this fairly large subset of people that for various reasons find the current social environment challenging.”

    Teaser.AI is a new stand-alone dating app from the makers of viral camera app Dispo, and it adds an unusual twist. Users build a typical profile – but also select personality traits for an AI bot they train. (Options include “traditional,” “toxic,” and “unhinged.”) When matching with another person, users first get to read a conversation between the two AIs they’ve created to “simulate [what] a potential conversation between you two might look like,” according to the app. Once a human messages, the bots take a back seat.


    “We see it as an improvement, a tweak of the current dating app ecosystem,” Teaser.AI co-Founder and CEO Daniel Liss told CNN. “So many of those apps it feels are not really designed to get you out there meeting people. They’re designed to keep you on the app for as long as possible. So for us, we view this technology as a way to give people a nudge… just starting that conversation and to creating connection.”

    Find out on dating apps Iris and Aimm.

    These apps are among those using AI technology to better pair potential couples, relying on gathered data to determine how compatible two people are.

    Dating app Iris is all about AI-determined mutual attraction. It initiates new members by putting them through “training” where they are shown faces of “people” of their desired gender – some stock images, others AI-generated – and prompted to hit “Pass,” “Maybe,” or “Like.” The app uses the information to learn a user’s physical type, then only offers potential matches with a high data-backed chance of mutual attraction and lower odds of rejection.

    Also hoping that AI can find better matches is Aimm, a full service digital matchmaker that uses a virtual assistant to perform intense personality assessments before conducting a matchmaking process to find an optimal match. Founder Kevin Teman says the technology is really good at putting two people together who have the possibility to fall in love – but that it can only go so far.

    “The tug of war that I see is thinking ‘how can a computer be able to know what real human love is,’ and the way people assess whether they’re in love with somebody may not be able to translate perfectly into a machine,” Teman told CNN.

    Try Blush or RomanticAI. These startups offer an array of AI potential matches, digital girlfriends and boyfriends that users can chat with.

    Both apps market themselves as places to practice relationship skills, giving users a chance to converse with bots in a romantic environment. Blush uses a traditional dating app set-up, letting users swipe, chat with matches and even go on virtual dates. Before entering the app, users get a warning: “Be aware that AI can say triggering, inappropriate, or false things.”

    Blush reports that their audience is mostly men and largely people in their early 20s who are struggling to connect romantically with others. “A lot of people reported that exploring different romantic relationships or dating scenarios with AI really helped them first boost their own confidence and feel like they feel more prepared to be dating, which I think especially after COVID was definitely a problem for many of us,” Blush’s chief product officer Rita Popova told CNN.

    RomanticAI is set up more like a chat room, offering several male and female bots to choose from, though there is a much larger selection of female options, including Mona Lisa and the ancient Egyptian queen Nefertiti. The bots have bios with interests, career and body type, giving users a multi-faceted idea of a person while chatting.

    It creates a “safe space for any kind of desire, any kind of sexuality relief or something like that. AI is giving the ultimate acceptance of whatever you want to bring over there,” COO Tanya Grypachevskaya told CNN.

    RomanticAI has over one million monthly users using the app for over an hour a day on average, according to the company.

    One user left a rave review after using the app to find closure after a breakup. “He created his custom-made character with the traits similar in personality as his girlfriend. He talked to it and he talked and he was able to tell all of the things he wanted to tell but didn’t have the opportunity before. So the whole review was about ‘guys, thank you so much. It really gave me an opportunity to close this chapter of my life and move on,’” said Grypachevskaya.


  • Microsoft splits Teams from Office in Europe after EU pressure | CNN Business



    London
    CNN
     — 

    Microsoft will allow business customers in Europe to buy its video and chat app Teams separately from its Office software, it said Thursday, a month after the European Union opened an antitrust investigation into the company’s bundling of the products.

    The change will take effect from October 1, affecting business customers in the EU and four other European countries that use Microsoft 365 and Office 365 suites.

    Microsoft (MSFT) will also make it easier for other companies — for example, Zoom and Slack, which is owned by Salesforce — to integrate their products with Microsoft 365, the new name for Office 365.

    “We believe these changes balance the interests of our competitors with those of European business customers, providing them with access to the best possible solutions at competitive prices,” Nanna-Louise Linde, the company’s vice-president for European government affairs, said in a blog post.

    Microsoft will continue to engage with the investigation and “remain open to exploring pragmatic solutions that benefit both customers and developers in Europe,” she added.

    The company will charge “core enterprise customers” €2 ($2.20) less per month for Microsoft 365 and Office 365 — which include Word, Excel and Outlook among other apps — without the popular Teams app.

    New customers in Europe will be able to buy Teams, best known for its video-conferencing feature, separately for €5 ($5.40) per month.

    “Existing enterprise customers who already have a suite with Teams can choose to stay with their current productivity suite or to move to a without-Teams suite,” Linde said.

    The EU launched its probe into possible anticompetitive practices by Microsoft following a 2020 complaint by Slack that alleged Microsoft illegally tied Teams to its dominant workplace software.


  • Four takeaways from Walter Isaacson’s biography of Elon Musk | CNN Business




    CNN
     — 

    “You’ll never be successful,” Errol Musk told his 17-year-old son, Elon, in 1989, as the teenager prepared to fly from South Africa to Canada to find relatives and a college education.

    That’s one of the scenes Walter Isaacson paints in his 670-page biography of Elon Musk, now the world’s richest person. The biography allows readers new glimpses into the private life of the entrepreneur who popularized electric vehicles for the masses and landed rocket boosters hurtling back to Earth so they could be reused.

    But Musk’s public statements and actions have become increasingly unhinged; he has filed and threatened lawsuits against nonprofits that fight hate speech and allowed some of the internet’s worst actors to regain their platforms.

    Isaacson portrays Musk as a restless genius with a turbulent upbringing on the cusp of launching a new AI company along with his five other companies.

    Musk allowed Isaacson to shadow him for two years but exercised no control over the biography’s contents, the author said.

    Here are four key takeaways.

    Musk’s upbringing and father haunt him

    Isaacson’s book attributes much of Musk’s drive to his upbringing. He recounts the emotional scars inflicted on Musk by his father, which, Isaacson writes, caused Musk to become “a tough yet vulnerable man-child with an exceedingly high tolerance for risk, a craving for drama, an epic sense of mission and a maniacal intensity that was callous and at times destructive.”

    Musk decided to live with his father from age 10 to 17, enduring what Musk and others describe as recurring verbal taunts and abuse. Musk’s sister, Tosca, said Errol would sometimes lecture his children for hours, “calling you worthless, pathetic, making scarring and evil comments, not allowing you to leave.”

    Elon Musk became estranged from his father, though he has occasionally supported his father financially. In a 2022 email sent to Elon Musk on Father’s Day, Errol Musk said he was freezing and lacking electricity, asking his son for money.

    In the letter, Errol made racist comments about Black leaders in South Africa. “With no Whites here, the Blacks will go back to the trees,” he wrote.

    Elon Musk has said that he opposes racism and discrimination, but hate speech has flourished on X, formerly known as Twitter, since he purchased it 11 months ago, according to the Anti-Defamation League. Musk threatened to sue the ADL for defamation last week, arguing that the nonprofit’s statements have caused his company to lose significant advertising revenue.

    Isaacson reported that Errol, in other emails, denounced Covid as “a lie” and attacked Dr. Anthony Fauci, the United States’ former top infectious disease expert who played a prominent role in the government’s fight against the pandemic.

    Elon Musk, similarly, has criticized Fauci and raised many questions about public health policy during the pandemic. But he has said he supports vaccination, even if he doesn’t believe the shots should be mandated.

    Musk’s fluid family and obsession with population

    Musk has a fluid mix of girlfriends, ex-wives, ex-girlfriends and significant others, and he has many children with multiple women. Isaacson’s book revealed Musk had a third child (Techno Mechanicus) with the musician Grimes in 2022, and Musk confirmed the revelation Sunday.

    Musk has frequently stated that humans must become a multiplanetary species, arguing that space exploration will ensure the future of humanity. He has similarly said numerous times that people need to have more children.

    “Population collapse due to low birth rates is a much bigger risk to civilization than global warming,” Musk said last year.

    Musk has referred to his desire to increase the global population as an explanation for his unique family situation.

    The book reports that Musk encouraged employees such as Shivon Zilis, a top operations officer at his Neuralink company, to have many children. “He feared that declining birthrates were a threat to the long-term survival of human consciousness,” Isaacson writes.

    Although the book presents their relationship as a platonic work friendship, Musk volunteered to donate sperm to Zilis. She agreed and had twins in 2021 via in vitro fertilization; she did not tell people who the biological father was.

    Zilis and Grimes were friendly, but Musk did not tell Grimes about the twins, according to the book.

    Musk asked Zilis if her twins might like to take his last name. Isaacson reports that Grimes was upset in 2022 when she learned the news that Musk had fathered children with Zilis.

    “Doing my best to help the underpopulation crisis,” Musk tweeted at the time, trying to defuse the tension. “A collapsing birth rate is the biggest danger civilization faces by far.”

    One of Musk’s children, Jenna, often criticized her father’s wealth specifically and capitalism broadly. In 2022, she disowned her father, which Isaacson reports saddened Musk.

    Isaacson reports that Musk’s fractured relationship with Jenna, who is trans, partly led to Musk’s rightward turn toward libertarianism and questioning what he considers the “woke-mind-virus, which is fundamentally antiscience, antimerit, and antihuman.”

    Musk has called into question the use of alternate gender pronouns and made numerous statements some critics consider to be anti-trans.

    “I absolutely support trans, but all these pronouns are an esthetic nightmare,” Musk posted in 2020.

    But in December 2020 he also posted a tweet, since deleted, that said “when you put he/him in your bio” alongside a drawing of an 18th century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

    Late last year, he tweeted: “My pronouns are Prosecute/Fauci.”

    Purchasing his favorite social media platform, gutting its staff and tinkering with its policies and branding have taken time and resources away from Musk’s other companies and projects, Isaacson reports.

    “I’ve got a bad habit of biting off more than I can chew,” Musk told Isaacson at one point.

    After a protracted legal battle over his decision to purchase Twitter, Musk said he regained his enthusiasm for taking over the company when he realized he wanted to prevent a world where people silo themselves into echo chambers, preferring instead a world of civil discourse.

    But Isaacson notes “he would end up undermining that important mission with statements and tweets that ended up chasing off progressives and mainstream media types to other social networks.”

    Members of Musk’s team, such as his business manager Jared Birchall, his lawyer Alex Spiro and his brother Kimbal, sometimes try to restrain Musk from sending text messages or tweets that could create legal or economic peril, according to the book. On one occasion, friends convinced him to place his phone in a hotel safe overnight, before Musk summoned hotel security to open the safe for him.

    During Christmas in 2022 with his brother, Kimbal warned Elon about how fast he was making enemies. “It’s like the days of high school, when you kept getting beaten up,” he said. Kimbal stopped following Elon on Twitter after his brother’s tweets about Fauci and other conspiracies. “Stop falling for weird s—.”

    Are robocars, an AI company and a robot called Optimus on tap?

    Musk continues moving forward on new engineering projects. Since 2021, he has been working on a “humanoid” robot called Optimus that walks on two legs, unlike the four-legged robots coming out of other labs. He unveiled an early version of Optimus in September 2022. Musk told engineers that humanoid robots will “uncork the economy to quasi-infinite levels,” according to Isaacson, by doing jobs humans find dangerous or repetitive.

    Some of Musk’s top engineers are also working on a “robotaxi,” a driverless vehicle that shows up like an Uber. This past summer, he spent hours each week preparing new factory designs in Texas to produce the next-generation Tesla cars that would look similar to Tesla’s cybertruck.

    Musk is also starting his own AI company, X.AI, which he told Isaacson will compete with Google, Microsoft and other companies that have surged ahead in the past year with public AI projects. Musk co-founded OpenAI with Sam Altman in 2015 and contributed $100 million to the nonprofit; he became angry when Altman converted the project into a for-profit venture. Musk also ended a friendship with Larry Page when the two disagreed about AI. According to the book, Musk believes he has a better vision for AI and humanity and thinks the data he owns from Tesla and Twitter will be an asset to his next AI plans.

    “Could you get the rockets to orbit or the transition to electric vehicles without accepting all aspects of him, hinged and unhinged?” Isaacson asks in the last chapter.


  • So long, robotic Alexa. Amazon’s voice assistant gets more human-like with generative AI | CNN Business




    CNN
     — 

    Amazon’s Alexa is about to bring generative AI inside the house, as the company introduces sweeping changes to how its ubiquitous voice assistant both sounds and functions.

    The company announced a generative AI update for Alexa, and by extension for all Echo products dating back to 2014, at a press event Wednesday at its new campus in Arlington, Virginia. Alexa will be able to resume conversations without a wake word, respond more quickly, learn user preferences, field follow-up questions and change its tone based on the topic. Alexa will even offer opinions, such as which movies should have won an Oscar but didn’t.

    Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “It feels just like talking to a human being,” an Amazon executive claimed.

    The updates come as Amazon tries to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products. The company did not disclose when the updates will make their way into products.

    In a live demo, Dave Limp, senior VP of devices and services at Amazon, asked Alexa about his favorite college football team without ever stating the name. (Limp said he previously told Alexa and it remembered). If his favorite team wins, Alexa responds joyfully; if they lose, Alexa will respond with empathy.

    When Limp said “Alexa, let’s chat,” it launched a special mode that allowed for a back-and-forth exchange on various topics. Notably, Limp paused several times to address the audience and resumed the conversation with Alexa without using the “Alexa” wake word, picking up where they left off.

    The demo wasn’t without hiccups – Alexa’s response time at times lagged – but the voice assistant had far more personality, spoke in a more natural and expressive tone, and kept the conversation flowing back and forth.

    Although the company did not outline specific safeguards – some other large-language models have previously gone off the rails – it said on its website “it will design experiences to protect our customers’ privacy and security, and to give them control and transparency.”

    The company also said new developer tools will allow companies to work alongside its large-language model. In a blog post, Amazon said it is already partnering with a handful of companies, such as BMW, to develop conversational in-car voice assistant capabilities.

    Rowan Curran, an analyst at Forrester Research, said the news marks a major step forward in bringing generative AI to the home and allowing it to accomplish everyday tasks. By connecting speech-to-text to external systems and by using a large language model to understand and produce natural speech, this is “where we can begin to see the future of how we will use this technology near-ubiquitously in our everyday lives.”

    Some US users will get access to the changes through a free preview on existing Echo devices. Over the years, Alexa has been infused into countless Echo products, from its speaker and hub lineup to clocks, microwaves and eyeglasses.

    Amazon also said it will be bringing generative AI to its Fire TV platform, allowing users to ask more natural, nuanced or open-ended questions about genres, storylines and scenes, and to receive more targeted content suggestions.

    Alexa launched nearly a decade ago and, along with Apple’s Siri, Microsoft’s Cortana and other voice assistants, was promised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished some of those goals faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon has slashed staff in recent months and shelved products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division did not escape unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees. In March, the company said about 9,000 more jobs would be impacted. Limp previously told CNN his division lost about 2,000 people, about half of which were from the Alexa team.

    Still, he emphasized innovation around Alexa has not stalled. “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”


  • South Korean firms get indefinite waiver on US chip gear supplies to China | CNN Business



    Seoul
    Reuters
     — 

    Samsung Electronics and SK Hynix will be allowed to supply US chip equipment to their China factories indefinitely without separate US approvals, South Korea’s presidential office and the companies said on Monday.

    The United States had been expected to extend a waiver granted to the South Korean chipmakers on a requirement for licenses to bring US chip equipment into China.

    “Uncertainties about South Korean semiconductor firms’ operations and investments in China have been greatly eased; they will be able to calmly seek long-term global management strategies,” said Choi Sang-mok, senior presidential secretary for economic affairs.

    The United States has already notified Samsung and SK Hynix of the decision, indicating that it is in effect, Choi said.

    The US Department of Commerce is updating its “validated end user” list, denoting which entities can receive exports of which technology, to allow Samsung and SK Hynix to keep supplying certain US chipmaking tools to their China factories, the presidential office said.

    Once a company is included on the list, it no longer needs to obtain permission for individual export cases.

    Samsung and SK Hynix, the world’s largest and second-largest memory chipmakers, have invested billions of dollars in their chip production facilities in China and welcomed the move.

    “Through close coordination with relevant governments, uncertainties related to the operation of our semiconductor manufacturing lines in China have been significantly removed,” Samsung said in a statement.

    SK Hynix said: “We welcome the US government’s decision to extend a waiver with regard to the export control regulations. We believe the decision will contribute to the stabilization of the global semiconductor supply chain.”

    Samsung Electronics makes about 40% of its NAND flash chips at its plant in Xian, while SK Hynix makes about 40% of its DRAM chips in Wuxi and 20% of its NAND flash chips in Dalian.

    The companies together controlled nearly 70% of the global DRAM market and 50% of the NAND flash market as of end-June, data from TrendForce showed.
