ReportWire

Tag: iab-computing

  • Elon Musk’s X Corp. sues California AG over content moderation law | CNN Business

    New York (CNN) —

    Elon Musk’s X Corp., the parent company of the platform formerly known as Twitter, on Friday sued California’s attorney general over the state’s new content moderation law.

    California Gov. Gavin Newsom signed bill AB 587 into law last September. The law requires social media companies to post their terms of service online and submit a semiannual report to the state attorney general outlining their content moderation policies and practices. Platforms must, among other things, disclose how their automated content moderation systems work, how they define controversial content categories such as “hate speech” and “disinformation,” and the number of pieces of content flagged or removed in such categories.

    Newsom’s office touted the bill as a way to improve transparency from social networks. But in a complaint filed in California’s Eastern District Court against California Attorney General Robert Bonta, X alleged that the law violates the First Amendment and California’s constitution by potentially compelling the company to moderate users’ politically charged speech.

    The law “compels companies like X Corp. to engage in speech against their will, impermissibly interferes with the constitutionally-protected editorial judgments of companies such as X Corp., has both the purpose and likely effect of pressuring companies such as X Corp. to remove, demonetize, or deprioritize constitutionally-protected speech,” the company alleged in the complaint. It added that the law could place an “undue burden” on social media companies such as Musk’s X, which is headquartered in California.

    Attorney General Bonta’s press office said in an email to CNN: “While we have not yet been served with the complaint, we will review it and respond in court.”

    A spokesperson for Newsom sent CNN a statement from last September in which the governor remarked on the bill.

    “California will not stand by as social media is weaponized to spread hate and disinformation that threaten our communities and foundational values as a country,” Newsom said in the statement. “Californians deserve to know how these platforms are impacting our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day.”

    The lawsuit comes as Musk has escalated his rhetoric over what kinds of speech should be permitted on his platform, as the company’s core advertising business has taken a major revenue hit over concerns, among other things, about the approach to content moderation. Under Musk’s leadership, the platform has made several changes to its content policies, including ceasing enforcement of its Covid-19 misinformation policy and reinstating many previously banned users.

    Just last month, at least two brands paused their ad spending on X after their advertisements ran alongside an account promoting Nazism. (X suspended the account after the issue was flagged and said ad impressions on the page were minimal.)

    The billionaire this week threatened a lawsuit against the Anti-Defamation League for defamation, claiming that the nonprofit organization’s statements about rising hate speech on the social media platform have torpedoed X’s advertising revenue. (The ADL says it does not comment on legal threats, but CEO Jonathan Greenblatt spoke out against the #BanTheADL campaign on X.)

    In Friday’s lawsuit, X Corp. alleged that requiring social media companies to report their moderation practices could pressure the platforms into “limiting or censoring constitutionally-protected content that the State finds objectionable.” It also claimed that the law could force social platforms “to take public positions on controversial and politically charged issues” and thus tailor those positions in a way it otherwise wouldn’t to avoid public scrutiny.

    The law “‘compel[s]’ X Corp. to ‘speak a particular message,’ which necessarily ‘alters the content of’ its speech,” in violation of its First Amendment rights, the company alleged in the complaint.

    The lawsuit seeks a jury trial on the constitutionality and legal validity of the California law.


  • Fortnite players can now apply for a portion of its $245 million FTC settlement | CNN Business

    New York (CNN) —

    Millions of Fortnite users can now claim their small part of the $245 million that the game’s parent company agreed to pay as part of a settlement with the US Federal Trade Commission.

    Epic Games in December settled allegations with the FTC that it used deceptive tactics that drove users to make unwanted purchases in the multiplayer shooter game that became wildly popular with younger generations a few years ago. The FTC said Tuesday it has now opened the claims process for the more than 37 million potentially affected users who could qualify for compensation.

    Epic Games agreed in December to pay a total of $520 million to settle US government allegations that it misled millions of players, including children and teens, into making unintended purchases and that it violated a landmark federal children’s privacy law.

    In one settlement, Epic agreed to pay $275 million to the US government to resolve claims that it violated the Children’s Online Privacy Protection Act by gathering the personal information of kids under the age of 13 without first receiving their parents’ consent. In a second and separate settlement, Epic also agreed to pay $245 million as refunds to consumers who were allegedly harmed by user-interface design choices that the FTC claimed were deceptive.

    The FTC said in a statement Tuesday that the Fortnite maker “used dark patterns and other deceptive practices to trick players into making unwanted purchases” and also “made it easy for children to rack up charges without parental consent.”

    (“Dark patterns” refer to the gently coercive design tactics used by countless websites and apps that critics say are used to manipulate people’s digital behaviors.)

    The FTC is now notifying users who may be eligible to receive part of that $245 million settlement fund. Affected users may receive an email from the FTC over the next month with a claim number, or they can go directly to the settlement site and file a claim using their Epic account ID.

    Here’s who can apply:

    • Users who were charged in-game currency for items they didn’t want between January 2017 and September 2022

    • Parents whose children made charges to their credit cards on Fortnite between January 2017 and November 2018

    • Users whose accounts were locked between January 2017 and September 2022 after they complained to their credit card company about wrongful charges

    Claimants must be at least 18 years old; for younger users, a parent can submit a claim on their behalf.

    Users have until January 17, 2024, to submit a claim to be included in the settlement class. It is not yet clear how much the individual settlement payments will be.

    Epic’s agreement with the FTC also prohibits the company from using dark patterns or charging consumers without their consent, and forbids Epic from locking players out of their accounts in response to chargeback requests that users file with their credit card companies to dispute unwanted charges.

    Epic said in a blog post in December when it reached the agreement that, “no developer creates a game with the intention of ending up here.” It added, “We accepted this agreement because we want Epic to be at the forefront of consumer protection and provide the best experience for our players.”


  • US regulator seeks court order to compel Elon Musk to testify about his Twitter acquisition | CNN Business

    New York (CNN) —

    The US Securities and Exchange Commission on Thursday applied for a court order to force Elon Musk to testify in an ongoing probe related to his acquisition of Twitter and public disclosures he made in connection with the deal, according to court filings.

    The filing Thursday in San Francisco federal court seeks a judge’s order requiring Musk to testify, alleging “blatant refusal to comply” with an earlier SEC subpoena.

    X, the company formerly known as Twitter, did not immediately respond to a request for comment.

    The SEC action is the latest turn in a long-running inquiry into whether Musk fully complied with his disclosure obligations when he began acquiring large amounts of Twitter stock, prior to his deal to buy the company. And it underscores years of friction between Musk and the agency over his public comments on numerous matters involving his companies.

    Musk began buying up large amounts of Twitter stock in early 2022, and he revealed on April 4 of that year that he had become the company’s largest shareholder. Later that month, Musk inked a deal to buy the platform for $44 billion and — after a monthslong legal battle attempting to exit the deal — officially closed the acquisition in October of last year. Musk has faced a number of legal challenges related to his Twitter acquisition in the months since his takeover.

    Musk testified twice as part of the SEC’s investigation in July 2022, according to the agency.

    Starting that same month, Musk produced “hundreds of documents” to federal investigators working on the probe, “including documents Musk authored,” according to a declaration by an SEC attorney filed alongside the agency’s court request.

    The SEC served Musk with a subpoena to testify again in the matter in May 2023, according to the court filing. The subpoena seeks evidence and testimony from Musk that the SEC does not yet possess, the agency said.

    Despite previously agreeing to testify on September 15 and rescheduling the testimony once, Musk “abruptly notified the SEC” two days before his scheduled appearance to say he would not be showing up, the filing states.

    The SEC attempted to negotiate with Musk to find alternative dates later this fall, according to court documents.

    “These good faith efforts were met with Musk’s blanket refusal to appear for testimony,” it adds.

    “The subpoena with which Musk failed to comply relates to an ongoing nonpublic investigation by the SEC,” the filing continued, “regarding whether, among other things, Musk violated various provisions of the federal securities laws in connection with (1) his 2022 purchases of Twitter, Inc (“Twitter”) stock, and (2) his 2022 statements and SEC filings relating to Twitter.”

    When Musk informed the SEC he would not be appearing to testify, his lawyer, Alex Spiro, wrote to the agency on September 13, saying Musk had “already sat for testimony twice in this matter” and that “enough is enough.”

    Spiro’s letter, which was included as an exhibit in the SEC’s court filings, accused regulators of seeking Musk’s testimony in bad faith and attempting to waste Musk’s time.

    In addition, Spiro claimed that the recent release of Walter Isaacson’s biography of Musk would interfere because it contained “new information potentially relevant to this matter” that would take time for both sides to digest.


  • Microsoft Outlook will soon write emails for you | CNN Business

    New York (CNN) —

    Artificial intelligence could soon be writing more company emails in Microsoft Outlook, as the company expands its rollout of AI tools for corporate users.

    The Microsoft 365 Copilot tool – “your everyday AI companion,” as the company bills it – will help users write their emails to “keep your sentences concise and error-free.” The tool also summarizes long email threads to quickly draft suggested replies.

    Users with Microsoft 365 Personal or Family subscriptions will get more advanced AI help through Microsoft Editor, an intelligent writing assistant. The update will include suggested edits for “clarity, conciseness, inclusive language and more” to help workers create more “polished and professional” emails, according to a blog post from the company in September.

    The company said the tool will be available to more corporate clients starting on November 1. It has already been in months-long testing with customers including Visa, General Motors, KPMG and Lumen Technologies.

    In March, Microsoft outlined its plans to bring artificial intelligence to its most recognizable productivity tools, including Outlook, PowerPoint, Excel and Word, with the promise of changing how millions do their work every day. The addition of its AI-powered “copilot” – which will help edit, summarize, create and compare documents – is built on the same technology that underpins ChatGPT.

    In addition to writing emails, Microsoft 365 users will be able to summarize meetings and create suggested follow-up action items, request to create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    Corporate customers will also get to use Microsoft 365 Chat, previously called Business Chat, which can scan the internet and employee emails, meetings, chats and files to behave as a sort of personalized secretary.

    The expansion will come less than a year after OpenAI publicly released viral AI chat tool ChatGPT, which stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    In the months since, many other companies have rolled out features underpinned by or similar to the technology. Microsoft rival Google, for example, has also brought AI to its productivity tools, including Gmail, Sheets and Docs.


  • Portable hotspots arrive in Maui to bring internet to residents and tourists | CNN Business

    New York (CNN) —

    Portable mobile hotspots have arrived in Maui to help bring internet service to the thousands of people who may have been unable to call for help since the wildfires started to rage out of control on the island.

    Verizon told CNN on Thursday its teams are currently deploying the first batch of satellite-based mobile hotspots at evacuation sites in areas of greatest need, particularly the west side of the island, west of Maalaea, Lahaina and Northern Kapalua.

    Verizon’s larger equipment, which is being barged over from Honolulu, is expected to arrive later in the day. This includes COLTs (Cells on Light Trucks) — a mobile site on wheels that connects to a carrier’s service via a satellite link — and a specialized satellite trailer used to provide service to a cell site that has a damaged fiber connection.

    “Our team is closely monitoring the situation on the ground and our network performance,” a Verizon spokesperson told CNN. “Verizon engineers on the island are working to restore service in impacted areas as quickly and safely as possible.”

    The company said it is working closely with the Hawaii Emergency Management Agency and the Maui County Emergency Operations Center to prioritize its network recovery.

    Other carriers continue to mobilize their efforts, too. An AT&T spokesperson said it is working with local public safety officials to deploy SatCOLTs (Satellite Cells on Light Trucks), drones with cell support and other solutions across the island, as equipment comes in from neighboring islands.

    Meanwhile, a T-Mobile spokesperson said its cell sites are “holding up well during the fires” but commercial power outages may be disrupting the service for some customers. “As soon as conditions allow, our priority is to deploy teams with portable generators that will bring temporary power back to our sites,” the spokesperson said.

    The Maui disaster has already wiped out power to at least 14,000 homes and businesses in the area, according to PowerOutage.us. Many cell towers have backup power generators but they have limited capacity to keep towers running.

    “911 is down. Cell service is down. Phone service is down,” Hawaii Lt. Gov. Sylvia Luke told CNN on Wednesday morning.

    Verizon, T-Mobile and AT&T said they are waiving call, text and data overage charges for Maui residents during this time.

    Although strong winds can sometimes threaten cell towers, most are strong enough to handle the worst that even a Category 5 hurricane can bring. Fire, however, complicates the issue.

    “When the fires get too close to cell sites, they will obviously burn equipment, antennas, and feedlines,” said Glenn O’Donnell, VP of research at market research firm Forrester. “In extreme cases, they will also weaken the towers, leading some to collapse. The smoke and flames can also attenuate [reduce the strength of] signals because of the particulate density in the air.”

    If a tower collapses, cell networks could take months to be restored. But if carriers are able and prepared to do restorations with mobile backup units, it could bring limited service back within hours, O’Donnell said. Wireless carriers often bring in COWs (Cells On Wheels), COLTs and GOaTs (Generator on a Trailer) in emergencies to provide backup service when cell towers go down.

    Cell towers have backup technology built in, but this is typically done through optical fiber cables or microwave (wireless) links, according to Dimitris Mavrakis, senior researcher at ABI Research. However, if something extraordinary happens, such as interaction with rampant fires, these links may experience “catastrophic failures and leave cells without a connection to the rest of the world.”

    And, in an emergency, a spike in call volume can overload the system — if people are able to get reception.

    “Even cells that have a good service may experience outages due to the sheer volume of communication happening at once,” Mavrakis said. “Everyone in these areas may be trying to contact relatives or the authorities at once, saturating the network and causing an outage. This is easier to correct, though, and network operators may put in place additional measures to render them operational quickly.”

    Although it’s unclear how long cell phone service could be down in affected regions, companies have been able to bring connectivity to disaster regions in the past. In 2017, Google worked with AT&T and T-Mobile to deploy its Project Loon balloons to deliver internet service to Puerto Rico in the aftermath of Hurricane Maria.

    Project Loon has since shut down.


  • Snapchat users freak out over AI bot that had a mind of its own | CNN Business

    (CNN) —

    Snapchat users were alarmed on Tuesday night when the platform’s artificial intelligence chatbot posted a live update to its profile and stopped responding to messages.

    The Snapchat My AI feature — which is powered by the viral AI chatbot tool ChatGPT — typically offers recommendations, answers questions and converses with users. But posting a live Story (a short video of what appeared to be a wall) for all Snapchat users to see was a new one: It’s a capability typically reserved for only its human users.

    The app’s fans were quick to share their concerns on social media. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling.” Another user wrote after the tool ignored his messages: “Even a robot ain’t got time for me.”

    Turns out, this wasn’t Snapchat working to make its My AI tool even more realistic. The company told CNN on Wednesday it was a glitch. “My AI experienced a temporary outage that’s now resolved,” a spokesperson said.

    Still, the strong reaction highlighted the fears many people have about the potential risks of artificial intelligence.

    Since launching in April, the tool has faced backlash not only from parents but from some Snapchat users with criticisms over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    Unlike some other AI tools, Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it and bring it into conversations with friends. The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear that you’re talking to a computer.

    While some may find value in the tool, the mixed reaction hinted at the challenges companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow.


  • Appeals court says Biden admin likely violated First Amendment but narrows order blocking officials from communicating with social media companies | CNN Politics

    (CNN) —

    A federal appeals court on Friday said the Biden administration likely violated the First Amendment in some of its communications with social media companies, but also narrowed a lower court judge’s order on the matter.

    The US 5th Circuit Court of Appeals ruled that certain administration officials – namely in the White House, the surgeon general, the US Centers for Disease Control and Prevention, and the Federal Bureau of Investigation – likely “coerced or significantly encouraged social media platforms to moderate content” in violation of the First Amendment as part of the administration’s efforts to combat Covid-19 disinformation.

    But the three-judge panel said the preliminary injunction issued by US District Judge Terry Doughty in July, which ordered some Biden administration agencies and top officials not to communicate with social media companies about certain content, was “both vague and broader than necessary to remedy the Plaintiffs’ injuries, as shown at this preliminary juncture.”

    In the lawsuit, which was brought by Republican attorneys general claiming unconstitutional censorship, the Biden administration had previously argued that channels with social media companies must stay open so that the federal government can help protect the public from threats to election security, Covid-19 misinformation and other dangers.

    In briefs submitted earlier this summer, the administration wrote, “There is a categorical, well-settled distinction between persuasion and coercion,” adding that Doughty had “equated legitimate efforts at persuasion with illicit efforts to coerce.”

    The 5th Circuit left in place part of the injunction that barred certain Biden administration officials from “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.”

    “But,” the appeals court said, “those terms could also capture otherwise legal speech. So, the injunction’s language must be further tailored to exclusively target illegal conduct and provide the officials with additional guidance or instruction on what behavior is prohibited.”

    The appeals court reversed several aspects of Doughty’s sweeping order, concluding that those pieces of it risked blocking the federal government “from engaging in legal conduct.”

    The 5th Circuit left the order, which had been temporarily blocked earlier in the summer, on pause for 10 days so that the case can be appealed to the Supreme Court.

    The opinion was handed down jointly by Circuit Judges Edith Clement, Jennifer Walker Elrod and Don Willett – all appointees of Republican presidents.

    The conservative appeals court sided with many of the arguments put forward by the plaintiffs, which included private individuals as well as Missouri and Louisiana, but also narrowed the injunction’s scope so that it applied only to the White House, the surgeon general, the CDC and the FBI. Doughty had included other agencies in his July order.

    This story has been updated with additional information.


  • The iPhone’s new Action Button is more than a one-trick pony | CNN Business

    (CNN) —

    The new iPhone 15 Pro lineup offers the typical slate of new features designed to persuade customers to upgrade: They’re lighter and thinner than last year’s crop. The new cameras are professional-grade and the switch to USB-C charging will make your life easier.

    But one new feature easily stands out: The Action Button.

    Apple has repurposed its physical mute button on the side of its high-end models into a more customizable tool, allowing users to carry out a handful of commands, from recording a voice memo and taking a picture to turning on the flashlight. The button can also be programmed to launch any app or shortcut, essentially turning it into a remote control or launching pad to gain quick access to something you want on demand.

    In the days since Apple’s iPhone 15 event at its Cupertino, California, headquarters, I’ve used it to load a variety of apps in a single press, including CNN, Amazon and Instagram. The Action Button seems certain to become a go-to shortcut for anyone who returns to the same app again and again throughout the day.

    But it also has the potential to become an even more powerful tool; you could program it to play your favorite playlist, turn on the smart lights in your living room or use it to open the garage door. You could even turn it into a dedicated button to call mom. It builds on iOS’s existing offering of ready-made or custom shortcuts, and Apple is encouraging developers to build other unique shortcuts that other users could activate on the Action Button.

    The change is subtle but it’s one of the few noticeable tweaks to the iPhone’s design this year. The Action Button is about the same size as the existing button, and users still hold it down to switch between muting and turning on the ringer. Commands are accompanied by visual feedback from the Dynamic Island bar, home to alerts and notifications at the top of the screen.

    The Action Button update, along with changes in the phone’s charging and camera systems, comes as Apple looks to give consumers more reasons to upgrade their iPhones. Last month, Apple’s sales fell for the third consecutive quarter. iPhone revenue came in at $39.7 billion for the quarter, marking an approximately 2% year-over-year decline, as people update their devices less often.

    Another reason to splurge on the iPhone 15 Pro ($999) or iPhone 15 Pro Max ($1,199): The phones come with a titanium casing — the same alloy used to build the Mars Rover — making them what Apple calls the thinnest and lightest Pro models to date. Apple’s entry-level iPhones, the iPhone 15 and iPhone 15 Plus, cost $799 and $899, respectively. The entire lineup starts shipping on Friday.

    To program the Action Button, iPhone 15 Pro users can visit the button’s section in Settings, scroll through and select from a series of functionalities — such as flashlight or camera. By picking the shortcuts option, however, users can sift through their list of apps or previously established commands.

    Once set, there’s a slight learning curve after years of habitually using the physical switch to silence the ringer. For this reason, it could take a while for some iPhone loyalists to change how they use the device.

    The Action Button isn’t entirely new; the company unveiled it last year on the Apple Watch Ultra. Apple told CNN it was inspired to bring it to the iPhone after hearing anecdotes from users who said they consistently leave their phone on silent, rendering that button essentially useless. Considering iPhone usage has changed a lot since the iPhone debuted 16 years ago, revisiting a hallmark feature like the mute button was only a matter of time, according to the company.

    Ramon Llamas, a director at market research firm IDC, believes last week’s announcement is only the first step toward making the Action Button more dynamic. “I’d like to think that the Action Button could be expanded a bit more, like one click will take you to one feature; two clicks takes you to another, and three clicks gets you something else,” Llamas said. “But I think that would be it. Any more than that and you risk launching the wrong app, like Wordle, at the wrong time (when you need your camera the most),” he said.

    It’s also a strategic way for Apple to make the most of already tight real estate on the device, according to Llamas. Annette Zimmerman, a VP analyst with market research firm Gartner, agrees, noting that “having one button to do exactly one thing isn’t really progressive in a time where everything has multi-functionality and can be programmed.”

    Although it’s unclear whether the Action Button will come to more devices in the future, Apple continues its push to create a uniform ecosystem for its customers. Similarly, Apple is adding a Double Tap feature that lets people control the Apple Watch with a finger gesture, just months after it showed off a similar gesture on its upcoming Vision Pro mixed reality headset.

    For now, iPhone 15 Pro users will enjoy playing with the new Action Button. While the feature alone isn’t worth the upgrade, the switch to universal charging, a faster processor and advanced camera capabilities make the iPhone 15 Pro a solid package, especially if you haven’t upgraded in the last few years.


  • Indonesia bans e-commerce transactions on social media in major blow to TikTok | CNN Business

    Jakarta (Reuters) —

    Indonesia has banned e-commerce transactions on social media platforms, the trade minister said on Wednesday, in a blow to short video app TikTok, which is doubling down on Southeast Asia’s biggest economy to boost its e-commerce business.

    The government said the move, which takes effect immediately, is aimed at protecting offline merchants and marketplaces, adding that predatory pricing on social media platforms is threatening small and medium-sized enterprises.

    The move comes just three months after TikTok pledged to invest billions of dollars in Southeast Asia, mainly in Indonesia, over the next few years in a major push to build its e-commerce platform TikTok Shop.

    TikTok, owned by China’s ByteDance, has 125 million active monthly users in Indonesia and has been looking to translate the large user base into a major e-commerce revenue source.

    A TikTok Indonesia spokesperson said it would pursue a constructive path forward and was “deeply concerned” with the announcement, “particularly how it would impact the livelihoods of the 6 million” local sellers active on TikTok Shop.

    Indonesia Trade Minister Zulkifli Hasan on Wednesday told reporters that the regulation is intended to ensure “fair and just” business competition, adding that it is also meant to protect users’ data.

    He warned against letting social media become an e-commerce platform, shop and bank all at the same time.

    The new regulation also requires e-commerce platforms in Indonesia to set a minimum price of $100 for certain items purchased directly from abroad, and requires that all products offered meet local standards, according to the regulation document reviewed by Reuters.

    Zulkifli said TikTok had one week to comply with the regulation or face the threat of closure. Indonesia Deputy Trade Minister Jerry Sambuaga earlier this month cited TikTok’s live streaming features as an example of people selling goods on social media.

    Research firm BMI said TikTok would be the only business affected by the transaction ban and the move was unlikely to harm the digital marketplace industry’s growth.

    Indonesia’s e-commerce market is dominated by the likes of homegrown tech firm GoTo’s Tokopedia, Sea’s Shopee and Chinese e-commerce giant Alibaba’s Lazada.

    E-commerce transactions in Indonesia amounted to nearly $52 billion last year and of that, 5% took place on TikTok, according to data from consultancy Momentum Works.

    Indonesia is among the few markets where TikTok has launched TikTok Shop, as it seeks to leverage its large user base in the country.

    Its 125 million monthly active users in Indonesia are almost on par with its user figures for Europe, and behind its US user base of more than 150 million. TikTok launched an online shopping service in the United States earlier this month.

    Reactions from retailers were mixed.

    Fahmi Ridho, a vendor selling clothes on TikTok, said the platform was a way for stores to recover from the blow dealt by the Covid-19 pandemic.

    “Sales don’t have to be necessarily through [brick and mortar] shops, you can do it online or wherever,” he said. “Everything will still have a portion.”

    But Edri, who goes by one name only and sells clothes at a major wholesale market in Jakarta, agreed with the regulation and stressed that there should be limits on items sold online.


  • SoftBank CEO says artificial general intelligence will come within 10 years | CNN Business



    Tokyo (Reuters) — 

    SoftBank CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realized within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI — typically using the term “singularity” — to transform business and society for some years, but this is the first time he has given a timeline for its development.

    He also introduced at the conference the idea of “Artificial Super Intelligence,” which he claimed would be realized in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

    He’s also prone to making strident claims about the transformative impact of new technologies. His predictions about the mobile internet have been largely borne out while those about the Internet of Things have not.

    Son called upon Japanese companies to “wake up” to the promise of AI, arguing they had fallen increasingly behind in the internet age, and reiterated his belief in chip designer Arm as core to the “AI revolution.”

    Arm CEO Rene Haas, speaking at the conference via video, touted the energy efficiency of Arm’s designs, saying they would become increasingly sought after to power artificial intelligence.

    Son said he thinks he is the only person who believes AGI will come within a decade. Haas said he thought it would come in his lifetime.


  • TikTok steps up efforts to counter misinformation about Israel-Hamas war | CNN Business



    London (CNN) — 

    TikTok is stepping up efforts to counter misinformation, incitement to violence and hate relating to the Israel-Hamas war on its online platform, it announced Sunday, days after the European Union (EU) warned social media companies they risked falling foul of the bloc’s content moderation laws.

    As part of its measures, TikTok is launching a command center to coordinate the work of its “safety professionals” around the world, improving the software it uses to automatically detect and remove graphic and violent content, and hiring more Arabic and Hebrew speakers to moderate content.

    TikTok said in a statement that, following the brutal attack by Hamas on Israeli civilians on October 7, it had “immediately mobilized significant resources and personnel to help maintain the safety of [its] community and integrity of [its] platform.”

    “We do not tolerate attempts to incite violence or spread hateful ideologies,” it added. “We have a zero-tolerance policy for content praising violent and hateful organizations and individuals.”

    The firm, owned by China’s ByteDance, said it had already removed more than 500,000 videos and shut down 8,000 livestream videos from the “impacted region” since the Hamas attack.

    As the conflict escalates — Israel has blocked the provision of electricity, food, fuel and water to Gaza, and has been signaling it is preparing for a ground invasion of the area — millions have turned to social media for updates, while misinformation has proliferated on these sites.

    One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attack, including false claims that it was orchestrated by the media.

    Last week, the EU told social media companies they needed to better protect “children and teenagers from violent content and terrorist propaganda” on their platforms.

    EU Commissioner Thierry Breton wrote to TikTok Thursday, in a letter shared on X, the platform formerly known as Twitter, saying the company had 24 hours to detail the steps it was taking to comply with EU rules on content moderation. Breton has sent similar letters to X, Google and Meta, the owner of Instagram and Facebook.


  • Biden administration defends communications with social media companies in high-stakes court fight | CNN Business



    Washington, DC (CNN) — 

    The Biden administration on Thursday defended its communications with social media giants in court, arguing those channels must stay open so that the federal government can help protect the public from threats to election security, Covid-19 misinformation and other dangers.

    The closely watched court fight reflects how social media has become an informational battleground for major social issues. It has revealed the messy challenges for social media companies as they try to manage the massive amounts of information on their platforms.

    And it has highlighted warnings by independent researchers, watchdog groups and government officials that malicious actors will continue to try to disrupt the country’s democracy by flooding the internet with bogus and divisive material ahead of the 2024 elections.

    In oral arguments before a New Orleans-based federal appeals court, the US government challenged a July injunction that blocked several federal agencies from discussing certain social media posts and sharing other information with online platforms, amid allegations by state governments that those communications amounted to a form of unconstitutional censorship.

    The appeals court last month temporarily blocked the injunction from taking effect. But the outcome of Thursday’s arguments will determine the ultimate fate of the order, which placed new limits on the Departments of Homeland Security, Health and Human Services and other federal agencies’ ability to coordinate with tech companies and civil society groups.

    If upheld by the US Court of Appeals for the Fifth Circuit, the injunction would suppress a broad range of public-private partnerships and undermine the US government’s mission to protect the public, the Biden administration argued.

    “For example, if there were a natural disaster, and there were untrue statements circulating on social media that were damaging to the public interest, the government would be powerless under the injunction to discourage social media companies from further disseminating those incorrect statements,” said Daniel Tenny, a Justice Department lawyer.

    Now, a three-judge panel of the Fifth Circuit is set to decide how executive agencies may respond to those threats.

    At issue is whether the US government unconstitutionally pressured social media platforms into censoring users’ speech, particularly when the government flagged posts to the platforms that it believed violated the companies’ own terms of service.

    During more than an hour of oral arguments Thursday, the three judges handling the appeal gave little indication of how they would rule in the case, with one judge asking just a couple of questions during the hearing. The other two spent much of the time pressing attorneys for the Biden administration and the plaintiffs in the case on issues concerning the scope of the injunction and whether the states even had the legal right – or standing – to bring the lawsuit.

    Before them is not only the request to reverse the lower court injunction, but also one from the administration to issue a more lasting pause on that injunction while the judges weigh the challenge to it.

    In briefs submitted to the court ahead of Thursday’s hearing, the Biden administration argued that a lower court judge was wrong to have identified the government communications with social media companies as potentially, in his words, “the most massive attack against free speech in United States’ [sic] history.”

    “There is a categorical, well-settled distinction between persuasion and coercion,” the administration’s lawyers wrote, adding that the lower court “equated legitimate efforts at persuasion with illicit efforts to coerce.”

    The administration’s opponents in the case, which include the states of Missouri and Louisiana, have argued that the federal government’s communications with social media companies are a violation of the First Amendment because even “‘encouragement short of compulsion’ can transform private conduct [by social media companies] into government action” that infringes on users’ speech rights.

    “Every one of these federal agencies has insinuated themselves into the content moderation decisions of major social media platforms,” D. John Sauer, an attorney representing the state of Louisiana, told the judges on Thursday. Hypothetically speaking, he added: “The Surgeon General can say, ‘All this speech is terrible, it’s awful.’ …. But what he can’t do is pick up the phone and say, ‘Take it down.’”

    In addition to the states, five individuals are also plaintiffs in the suit. They include three doctors who have been critical of state and federal pandemic-era restrictions, a Louisiana woman who claims she was censored by social media companies for her online criticisms of Covid health measures and a man who runs a far-right website known for pushing conspiracy theories.

    Much of Thursday’s oral arguments hinged on the definition of coercive communication and how courts have analyzed government pressure against private parties in past cases.

    But the states also claimed that there could be a pathway to finding a constitutional violation if the court agreed that social media companies, in heeding the administration’s calls to action, had been effectively turned into agents of the US government.

    In the past month, after District Judge Terry Doughty issued his injunction, current and former US officials, along with outside researchers and academics, have worried that the order could lead to a chilling effect for efforts to protect US elections.

    “There is no serious dispute that foreign adversaries have and continue to attempt to interfere in our elections and that they use social media to do it,” FBI Director Christopher Wray testified to the House Judiciary Committee in July. “President Trump himself in 2018 declared a national emergency to that very effect, and the Senate Intelligence Committee — in a bipartisan, overwhelmingly bipartisan way — not only found the same thing but called for more information-sharing between us and the social media.”

    Ohio Republican Rep. Jim Jordan, the panel’s chair, remains unconvinced. Earlier this week, he and other Republican lawmakers filed their own brief to the appeals court, accusing the Biden administration of a campaign to stifle speech.

    “On issue after issue, the Biden Administration has distorted the free marketplace of ideas promised by the First Amendment, bringing the weight of federal authority to bear on any speech it dislikes—including memes and jokes,” Jordan and the other lawmakers wrote. “Of course, Big Tech companies often required little coercion to do the Administration’s bidding on some issues. Generally eager to please their ideological allies and overseers in the federal government, these companies and other private entities have repeatedly censored accurate speech on important public issues.”


  • Schools are teaching ChatGPT, so students aren’t left behind | CNN Business



    New York (CNN) — 

    When college administrator Lance Eaton created a working spreadsheet about the generative AI policies adopted by universities last spring, it was mostly filled with entries about how to ban tools like ChatGPT.

    But now the list, which is updated by educators at both small and large US and international universities, is considerably different: Schools are encouraging and even teaching students how to best use these tools.

    “Earlier on, we saw a kneejerk reaction to AI by banning it going into spring semester, but now the talk is about why it makes sense for students to use it,” Eaton, an administrator at Rhode Island-based College Unbound, told CNN.

    He said his growing list continues to be discussed and shared in popular AI-focused Facebook groups, such as Higher Ed Discussions of Writing and AI, and the Google group AI in Education.

    “It’s really helped educators see how others are adapting to and framing AI in the classroom,” Eaton said. “AI is still going to feel uncomfortable, but now they can go in and see how a university or a range of different courses, from coding to sociology, are approaching it.”

    With experts expecting artificial intelligence to see ever wider application, professors now fear that ignoring or discouraging its use would be a disservice to students and leave many behind when they enter the workforce.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists and passed exams at esteemed universities. The technology, and similar tools such as Google’s Bard, is trained on vast amounts of online data in order to generate responses to user prompts. While they gained traction among users, the tools also raised some concerns about inaccuracies, cheating, the spreading of misinformation and the potential to perpetuate biases.

    According to a study conducted by higher education research group Intelligent.com, about 30% of college students used ChatGPT for schoolwork this past academic year and it was used most in English classes.

    Jules White, an associate professor of computer science at Vanderbilt University, believes professors should be explicit in the first few days of school about the course’s stance on using AI, and that the policy should be included in the syllabus.

    “It cannot be ignored,” he said. “I think it’s incredibly important for students, faculty and alumni to become experts in AI because it will be so transformative across every industry in demand so we provide the right training.”

    Vanderbilt is among the early leaders taking a strong stance in support of generative AI by offering university-wide training and workshops to faculty and students. A three-week 18-hour online course taught by White this summer was taken by over 90,000 students, and his paper on “prompt engineering” best practices is routinely cited among academics.

    “The biggest challenge is with how you frame the instructions, or ‘prompts,’” he said. “It has a profound impact on the quality of the response and asking the same thing in various ways can get dramatically different results. We want to make sure our community knows how to effectively leverage this.”
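    White’s point about framing can be illustrated with two phrasings of the same request; a hypothetical sketch (the question and wording are illustrative only):

    ```python
    # Two ways of asking the same thing; per White, framing alone can
    # produce dramatically different responses from the same model.
    question = "Why does the moon have phases?"

    terse_prompt = question

    structured_prompt = (
        "You are an astronomy teacher. In three short steps, and with one "
        "everyday analogy, explain the following for a ten-year-old: "
        + question
    )

    print(terse_prompt)
    print(structured_prompt)
    ```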

    Prompt engineering jobs, which typically require basic programming experience, can pay up to $300,000.

    Although White said concerns around cheating still exist, he believes students who want to plagiarize can still seek out other methods such as Wikipedia or Google searches. Instead, students should be taught that “if they use it in other ways, they will be far more successful.”

    Diane Gayeski, a professor of communications at Ithaca College, said she plans to incorporate ChatGPT and other tools in her fall curriculum, similar to her approach in the spring. She previously asked students to collaborate with the tool to come up with interview questions for assignments, write social media posts and critique the output based on the prompts given.

    “My job is to prepare students for PR, communications and social media managers, and people in these fields are already using AI tools as part of their everyday work to be more efficient,” she said. “I need to make sure they understand how they work, but I do want them to cite when ChatGPT is being used.”

    Gayeski added that as long as there is transparency, there should be no shame in adopting the technology.

    Some schools are hiring outside experts to teach both faculty and students about how to use AI tools. Tyler Tarver, a former high school principal who now teaches educators about tech tool strategies, said he’s made over 50 speeches at schools and conferences across Texas, Arkansas and Illinois over the past few months. He also offers an online three-hour training for educators.

    “Teachers need to learn how to use it because even if they never use it, their students will,” Tarver said.

    Tarver said that he teaches students, for example, how the tools can be used to catch grammar mistakes, and how teachers can use them to assist with grading. “It can cut down on teacher bias,” Tarver said.

    He argues teachers could grade students a certain way even if they’ve improved over time. By running an assignment through ChatGPT and asking it to grade the sentence structure on a scale from one to 10, the response could “serve as a second pair of eyes to make sure they’re not missing anything,” Tarver said.

    “That shouldn’t be the final grade, and teachers shouldn’t use it to cheat or cut corners either, but it can help inform grading,” he said. “The bottom line is that this is like when the car was invented. You don’t want to be the last person in the horse and buggy.”
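    Tarver’s grading idea can be made concrete; below is a minimal, hypothetical sketch of such a prompt. The function name and exact wording are illustrative assumptions, not from the article, and the teacher still makes the final grading decision.

    ```python
    def grading_prompt(assignment_text: str) -> str:
        """Build a prompt asking a chatbot to score sentence structure.

        A hypothetical sketch of the kind of request Tarver describes:
        the model acts as a second pair of eyes, not as the grader.
        """
        return (
            "Grade the sentence structure of the following student assignment "
            "on a scale from one to 10, and briefly justify the score. "
            "Do not assign a final grade.\n\n"
            + assignment_text
        )

    # The returned string would then be pasted into a chatbot such as ChatGPT.
    print(grading_prompt("The mitochondria is the powerhouse of the cell."))
    ```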


  • AI tools make things up a lot, and that’s a huge problem | CNN Business




    (CNN) — 

    Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating.

    AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

    Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. Some social media users, meanwhile, simply blast chatbots as “pathological liars.”

    But all of these descriptors stem from our all-too-human tendency to anthropomorphize the actions of machines, according to Suresh Venkatasubramanian, a professor at Brown University who helped co-author the White House’s Blueprint for an AI Bill of Rights.

    The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”

    The AI researcher said that a better behavioral analogy than hallucinating or lying, which carries connotations of something being wrong or having ill-intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.”

    Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether or not chatbot hallucinations are even a solvable problem.

    Simply put, a hallucination refers to when an AI model “starts to make up stuff — stuff that is not in-line with reality,” according to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

    “But it does it with pure confidence,” West added, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    This means that it can be hard for users to discern what’s true or not if they’re asking a chatbot something they don’t already know the answer to, West said.

    A number of high-profile hallucinations from AI tools have already made headlines. When Google first unveiled a demo of Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer in response to a question about new discoveries made by the James Webb Space Telescope. (A Google spokesperson at the time told CNN that the incident “highlights the importance of a rigorous testing process,” and said the company was working to “make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”)

    A veteran New York lawyer also landed in hot water when he used ChatGPT for legal research, and submitted a brief that included six “bogus” cases that the chatbot appears to have simply made up. News outlet CNET was also forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.
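    The underlying arithmetic that tripped up the AI-generated article is itself straightforward: a principal P at annual rate r, compounded n times per year for t years, grows to P(1 + r/n)^(nt). A minimal sketch (the function name and figures are illustrative, not from CNET’s article):

    ```python
    def future_value(principal: float, annual_rate: float, years: float,
                     periods_per_year: int = 12) -> float:
        """Future value of a deposit with periodically compounded interest."""
        growth = (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
        return principal * growth

    # $10,000 at 3% APR compounded monthly earns roughly $304 in interest
    # over the first year, not thousands of dollars.
    print(round(future_value(10_000, 0.03, 1) - 10_000, 2))  # → 304.16
    ```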

    Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors, such as users asking ChatGPT to write poetry or song lyrics.

    But there are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

    Venkatasubramanian added that at present, relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview.

    Venkatasubramanian said that ultimately, he thinks these tools “shouldn’t be used in places where people are going to be materially impacted. At least not yet.”

    How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.

    Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

    “These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic.”

    “And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it,” he added. “Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”

    West, of the University of Washington, echoed his sentiments, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”

    “It might just be an intrinsic characteristic of these things that will always be there,” West said.

    Google’s Bard and OpenAI’s ChatGPT both attempt to be transparent with users from the get-go that the tools may produce inaccurate responses. And the companies have expressed that they’re working on solutions.

    Earlier this year, Google CEO Sundar Pichai said in an interview with CBS’ “60 Minutes” that “no one in the field has yet solved the hallucination problems,” and “all models have this as an issue.” On whether it was a solvable problem, Pichai said, “It’s a matter of intense debate. I think we’ll make progress.”

    And Sam Altman, CEO of ChatGPT-maker OpenAI, predicted during remarks in June at India’s Indraprastha Institute of Information Technology, Delhi, that it will take a year and a half or two years to “get the hallucination problem to a much, much better place.” “There is a balance between creativity and perfect accuracy,” he added. “And the model will need to learn when you want one or the other.”

    In response to a follow-up question on using ChatGPT for research, however, the chief executive quipped: “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”


  • Google’s antitrust showdown: What’s at stake for the internet search titan | CNN Business




    (CNN) — 

    Google will face off in court Tuesday against government officials who have accused the company of antitrust violations in its massive search business, kicking off a long-anticipated legal showdown that could reshape one of the internet’s most dominant platforms.

    The trial beginning this week in Washington before a federal judge marks the culmination of two ongoing lawsuits against Google that started during the Trump administration. Legal experts describe the actions as the country’s biggest monopolization case since the US government took on Microsoft in the 1990s.

    In separate complaints, the Justice Department and dozens of states accused Google in 2020 of abusing its dominance in online search, alleging that it harmed competition through deals with wireless carriers and smartphone makers that made Google Search the default or exclusive option on products used by millions of consumers. The complaints were eventually consolidated into a single case.

    Google has maintained that it competes on the merits and that consumers prefer its tools because they are the best, not because it has moved to illegally restrict competition. Google’s search business provides more than half of the $283 billion in revenue and $76 billion in net income Google’s parent company, Alphabet, recorded in 2022. Search has fueled the company’s growth to a more than $1.7 trillion market capitalization.

    Now, the company is set to defend itself in a multiweek trial that could upend the way Google distributes its search engine to users. The case is expected to feature testimony from high-profile witnesses including former employees of Google and Samsung, along with executives from Apple, including senior vice president Eddy Cue. It is the first case to go to trial in a series of court challenges targeting Google’s far-reaching economic power, testing the willingness of courts to clamp down on large tech platforms.

    “This is a backwards-looking case at a time of unprecedented innovation,” said Google President of Global Affairs Kent Walker, “including breakthroughs in AI, new apps and new services, all of which are creating more competition and more options for people than ever before. People don’t use Google because they have to — they use it because they want to. It’s easy to switch your default search engine — we’re long past the era of dial-up internet and CD-ROMs.”

    The trial may also be a bellwether for the more assertive antitrust agenda of the Biden administration.

    In its initial complaint, the US government alleged in part that Google pays billions of dollars a year to device manufacturers including Apple, LG, Motorola and Samsung — and browser developers like Mozilla and Opera — to be their default search engine and in many cases to prohibit them from dealing with Google’s competitors.

    As a result, the complaint alleges, “Google effectively owns or controls search distribution channels accounting for roughly 80 percent of the general search queries in the United States.”

    The lawsuit also alleges that Google’s Android operating system deals with device makers are anticompetitive, because they require smartphone companies to pre-install other Google-owned apps, such as Gmail, Chrome or Maps.

    At the time the lawsuit was first filed, US antitrust officials did not rule out the possibility of a Google breakup, warning that Google’s behavior could threaten future innovation or the rise of a Google successor.

    Separately, a group of states, led by Colorado, made additional allegations against Google, claiming that the way Google structures its search results page harms competition by prioritizing the company’s own apps and services over web pages, links, reviews and content from other third-party sites.

    But the judge overseeing the case, Judge Amit Mehta in the US District Court for the District of Columbia, tossed out those claims in a ruling last month, narrowing the scope of allegations Google must defend and saying the states had not done enough to show a trial was necessary to determine whether Google’s search results rankings were anticompetitive.

    Despite that ruling, the trial represents the US government’s furthest progress in challenging Google to date. Mehta has said Google’s pole position among search engines on browsers and smartphones “is a hotly disputed issue” and that the trial will determine “whether, as a matter of actual market reality, Google’s position as the default search engine across multiple browsers is a form of exclusionary conduct.”

    In January, meanwhile, the Biden administration launched another antitrust suit against Google in opposition to the company’s advertising technology business, accusing it of maintaining an illegal monopoly. That case remains in its early stages at the US District Court for the Eastern District of Virginia.


  • Epic Games to lay off 16% of its workforce | CNN Business





    CNN
     — 

    Epic Games, the maker of Fortnite, said on Thursday that it will lay off 16% of its staff, around 830 employees, as it attempts to reverse what CEO Tim Sweeney called “unrealistic” spending.

    In a letter to employees Thursday, Sweeney said the video game company had been “spending way more money than we earn, investing in the next evolution of Epic.”

    “I had long been optimistic that we could power through this transition without layoffs, but in retrospect I see that this was unrealistic,” Sweeney said in the letter, which the company shared publicly. He added that Epic plans to divest from the online independent music platform Bandcamp, which it bought last year and which will now be acquired by the music marketplace firm Songtradr. Epic will also spin off most of its marketing division SuperAwesome into a standalone company.

    Epic’s layoffs are just the latest job cuts to hit the tech industry, which was forced to adjust after the stunning growth many companies saw during the height of the Covid-19 pandemic began to slow. Meta, Microsoft, T-Mobile, Lyft and others all reduced their workforces earlier this year. More recently, Google parent Alphabet made its second round of layoffs of the year, eliminating several hundred recruiting jobs in September after having cut 12,000 employees in January.

    About two-thirds of Epic’s Thursday layoffs will impact employees outside the company’s “core development” teams, Sweeney said. Some laid-off workers announced on LinkedIn that they had been affected, including employees working in user experience for Fortnite, production, employee engagement and recruitment.

    Laid-off employees will receive a severance offer that includes six months of base pay, accelerated stock vesting and other benefits, according to Sweeney.

    “We’re cutting costs without breaking development or our core lines of businesses so we can continue to focus on our ambitious plans,” Sweeney said. “Some of our products and initiatives will land on schedule, and some may not ship when planned because they are under-resourced for the time being. We’re ok with the schedule tradeoff if it means holding on to our ability to achieve our goals.”

    The Epic layoffs also come amid the latest escalation in a protracted legal battle between the video game company and tech giant Apple. Following a yearslong back-and-forth over an antitrust lawsuit brought by Epic over Apple’s App Store payment practices, both companies have asked the US Supreme Court to review a lower court ruling in the case.


  • Google unveils Pixel 8 built for ‘the generative AI era’ | CNN Business





    CNN
     — 

    There’s nothing particularly new about Google’s latest-generation Pixel 8 smartphone hardware. That’s why the company is pushing hard to tout its AI-powered new software, which Google says was built specifically for the “first phone of the generative AI era.”

    At a press event in New York City, Google (GOOG) showed off the new Pixel 8 and Pixel 8 Pro devices, which largely look the same as the year prior, albeit with more rounded edges. But inside, its new G3 Tensor chip unlocks an AI-powered world aimed at simplifying your life, from asking the device to summarize news articles and websites to using Google Assistant to field phone calls and tweaking photos to move or resize objects.

    The 6.3-inch Pixel 8 and the 6.7-inch Pixel 8 Pro come with a brighter display, a new camera system and longer-lasting battery life. The Pixel 8 is available in three colors – hazel, rose and obsidian – and starts at $699, about $100 less than the baseline iPhone 14 with the same amount of storage. (That’s about $100 more than last year’s Pixel 7.)

    Meanwhile, the Pixel 8 Pro – which touts a polished aluminum frame and a matte back glass this year – now has the ability to take better low-light photos and sharper selfies. It starts at $999 – the same price as the iPhone 15 Pro – and is available in three colors: bay, porcelain and obsidian.

    Although these upgrades are mostly incremental, the AI enhancements and related features may appeal to tech enthusiasts who want the latest version of Android and an alternative to Apple or Samsung smartphones.

    At the same time, Google’s Pixel line remains a niche product, with a global smartphone market share of about 1%, according to data from ABI Research. Google also sells the phones in only a handful of countries; keeping volumes low has been a deliberate strategy, as Google remains predominantly a software company with many partners running Android.

    Reece Hayden, an analyst at ABI Research, said Google is looking to establish itself as an early market leader amid the “generative AI-related hysteria,” which kicked into high gear late last year with the introduction of ChatGPT. Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “[Adding it to the Pixel] creates further product differentiation by leveraging internal capabilities that Apple may not have,” said Hayden.

    He expects this announcement to be the first of many similar efforts coming to hardware over the next year, especially among brands that have already made investments in this area.

    Here’s a closer look at what Google announced and some of the standout new AI features:

    A Google employee demonstrates manual focus features of the new Google Pixel 8 Pro Phone in New York City, U.S., October 4, 2023.

    Google showed off a handful of photo features coming to its Pixel line, including Magic Editor, which uses generative AI to reposition and resize a subject. Similarly, a new Audio Magic Eraser tool lets users erase distracting sounds from videos.

    Another tool called Best Take snaps a series of photos and then aggregates the faces into one shot so everyone looks their best. And a new Zoom Enhance feature lets users pinch to zoom in about 30 times after a photo is taken to focus on and edit a specific area.

    The company said these efforts aim to “let you capture every moment just how you want to remember it.”

    Although the tools are intended to give users more control over their photos, some analysts, like Thomas Husson at market research firm Forrester, believe they will make it harder to distinguish between what’s real and what’s not.

    “The fact that Google refers to a ‘Magic Eraser’ will blur the distinction between real photos and heavily edited ones,” Husson said. But he warns an uptick in deepfake apps already makes it hard to decipher the authenticity of some shots. “You don’t really need Google AI for that.”

    The company said Google Assistant will now sound more realistic when it engages with callers. Google’s call screening tool already lets Assistant field incoming calls, speak to callers and determine who’s on the line before pushing a call through to the user. But its robotic voice will sound increasingly natural, the company said.

    Google is also bringing the capabilities of its Bard AI chatbot to Google Assistant, so it will be able to do more than set an alarm or tell the weather. With its new generative AI capabilities, it will be able to review important emails in a user’s inbox or reveal more about a hotel that popped up on their Instagram feed. Assistant will also be able to understand user questions in voice, text and images.

    “With generative AI on the scene, it’s really creating a lot of new opportunities to build an even more intuitive and intelligent and personalized digital assistant,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN.

    In addition to making Assistant more useful, the tool will make it easier for more users to interact with Google’s six-month-old Bard on interfaces they may already frequently engage with. Last month, Google rolled out a major expansion of Bard, allowing users to link the tool to their Gmail and other Google Workspace tools and making it easier to fact check the AI’s responses.

    Google launched Assistant with Bard to a small test group on Wednesday, and it will be more widely available to Android and iOS users in the coming months.

    AI is also getting smarter on the Pixel Watch 2 ($349), its second-generation smartwatch. Users can use Bard capabilities via an upgraded Google Assistant watch app to ask it how they slept and get other health insights.

    In addition, the Pixel Watch 2 features a new heart rate sensor, which works alongside a new AI-driven heart rate algorithm to provide a more accurate reading than before. But Hayden said he doesn’t think more AI will add much to its existing value proposition.

    “Smart watches already include a fair amount of AI, and Pixel is no different,” he said.


  • Adobe previews new AI editing tools | CNN Business




    New York
    CNN
     — 

    Photo-editing software maker Adobe unveiled a slew of new AI-powered tools and features last week at its annual Max event, including a dress that transforms into a wearable screen and streamlined ways to delete elements from photos.

    The company previewed a series of prototype tools that make use of both generative AI and 3D image technology in the Adobe MAX Sneaks showcase. Covering photo, audio, video, 3D, fashion and design, the new capabilities are meant to give the public a sneak peek into early-stage ideas that might one day become widely used components of Adobe products.

    A highlight of the event was Adobe’s Project Primrose, an interactive dress that shifts into different colors and patterns as it’s worn.

    Other previewed items include Project Stardust, a tool that automatically detects each object in an image and lets users perform a variety of tasks on it. For example, it can spot a suitcase within a photo so it can be moved or deleted, or predict and prompt likely tasks, such as deleting people from the background of an image.

    A screenshot of Project Stardust, a tool unveiled as part of Adobe’s annual MAX event.

    Also on display was Project Dub Dub Dub, technology that can automatically dub a video’s audio into all supported languages while preserving the speaker’s voice, as was a new tool that previews what applying Adobe’s text-to-image generative AI tool Firefly to videos might look like.

    Adobe first began adding Firefly into a Photoshop beta app in May, with the goal of “dramatically accelerating” how users edit their photos. It allows users to add or delete elements from images with just a text prompt. It can also match the lighting and style of the existing images automatically, the company said.


  • Illinois Supreme Court upholds state’s assault-style weapons ban | CNN Politics





    CNN
     — 

    The Illinois Supreme Court on Friday upheld the state’s assault-style weapons ban in a 4-3 ruling after months of legal challenges sought to dismantle the law.

    State lawmakers in January passed, and Democratic Gov. J.B. Pritzker signed into law, a measure to ban assault-style rifles and high-capacity magazines. Those who already own such rifles face limitations on their sale and transfer and must register them with the Illinois State Police by 2024.

    That law – which came about six months after the July 2022 Highland Park, Illinois, shooting – faced immediate lawsuits in state and federal court that argued it violated the Illinois and US constitutions.

    A Macon County Circuit Court judge found earlier this year that exemptions to the law, including for law enforcement officers and armed guards at federally supervised nuclear sites, violated the equal protection clause of the state’s constitution.

    The Illinois Supreme Court agreed to fast-track the state’s appeal and, in a 20-page opinion, reversed the circuit court’s judgment. The majority opinion said it focused on two core issues brought by the plaintiffs: whether the law violated the plaintiffs’ right to equal protection, and whether it constituted special legislation that created laws for some firearms owners and not others. The majority opinion notably did not decide whether the ban violated the Second Amendment, asserting that the plaintiffs had waived this issue.

    “We express no opinion on the potential viability of plaintiffs’ waived claim concerning the Second Amendment,” they wrote.

    However, one of the plaintiffs’ attorneys, Jerry Stocks, told CNN the majority justices misrepresented their arguments. Stocks said the Second Amendment is a fundamental right inextricably linked to their arguments and thus should have weighed heavily on scrutiny of the ban. Ignoring the issue altogether was improper, he said.

    “We have a circus in Illinois and the clowns are in charge right now,” Stocks said.

    Illinois Attorney General Kwame Raoul said the new law is a “critical part” of the state’s efforts to combat gun violence, and Pritzker’s office hailed the decision to uphold “a commonsense gun reform law to keep mass-killing machines off of our streets and out of our schools, malls, parks, and places of worship.”

    Nancy Rotering, the Democratic mayor of Highland Park, called on Congress to act on tougher federal restrictions and said Friday’s decision “sends a message to residents that saving lives takes precedence over thoughts and prayers and acknowledges the importance of sensible gun control measures.”

    Illinois has struggled to restrict the flow of illegal guns, particularly in Chicago, while officials in the state have faced legal hurdles to implementing new gun restrictions.

    Gun rights advocates challenged the assault-style weapons ban – along with a city ordinance passed last year by Naperville, Illinois, that bans the sale of assault rifles – and asked the US Supreme Court to block it, but the high court in May refused to intervene.

    This story has been updated with additional details.


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business




    Hong Kong
    CNN
     — 

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent its shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

    Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT or its successor, GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own iteration in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE a month later, showing how it could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.
