ReportWire

Tag: iab-computing

  • Two Supreme Court cases this week could upend the entire internet | CNN Business

Washington (CNN) —

    The Supreme Court is set to hear back-to-back oral arguments this week in two cases that could significantly reshape online speech and content moderation.

    The outcome of the oral arguments, scheduled for Tuesday and Wednesday, could determine whether tech platforms and social media companies can be sued for recommending content to their users or for supporting acts of international terrorism by hosting terrorist content. It marks the Court’s first-ever review of a hot-button federal law that largely protects websites from lawsuits over user-generated content.

    The closely watched cases, known as Gonzalez v. Google and Twitter v. Taamneh, carry significant stakes for the wider internet. An expansion of apps and websites’ legal risk for hosting or promoting content could lead to major changes at sites, including Facebook, Wikipedia and YouTube, to name a few.

    The litigation has produced some of the most intense rhetoric in years from the tech sector about the potential impact on the internet’s future. US lawmakers, civil society groups and more than two dozen states have also jumped into the debate with filings at the Court.

At the heart of the legal battle is Section 230 of the Communications Decency Act, a nearly 30-year-old federal law that courts have repeatedly said provides broad protections to tech platforms, but that has come under growing scrutiny alongside criticism of Big Tech’s content moderation decisions.

    The law has critics on both sides of the aisle. Many Republican officials allege that Section 230 gives social media platforms a license to censor conservative viewpoints. Prominent Democrats, including President Joe Biden, have argued Section 230 prevents tech giants from being held accountable for spreading misinformation and hate speech.

    In recent years, some in Congress have pushed for changes to Section 230 that might expose tech platforms to more liability, along with proposals to amend US antitrust rules and other bills aimed at reining in dominant tech platforms. But those efforts have largely stalled, leaving the Supreme Court as the likeliest source of change in the coming months to how the United States regulates digital services.

    Rulings in the cases are expected by the end of June.

    The case involving Google zeroes in on whether it can be sued because of its subsidiary YouTube’s algorithmic promotion of terrorist videos on its platform.

    According to the plaintiffs in the case — the family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris — YouTube’s targeted recommendations violated a US antiterrorism law by helping to radicalize viewers and promote ISIS’s worldview.

    The allegation seeks to carve out content recommendations so that they do not receive protections under Section 230, potentially exposing tech platforms to more liability for how they run their services.

    Google and other tech companies have said that that interpretation of Section 230 would increase the legal risks associated with ranking, sorting and curating online content, a basic feature of the modern internet. Google has claimed that in such a scenario, websites would seek to play it safe by either removing far more content than is necessary, or by giving up on content moderation altogether and allowing even more harmful material on their platforms.

    Friend-of-the-court filings by Craigslist, Microsoft, Yelp and others have suggested that the stakes are not limited to algorithms and could also end up affecting virtually anything on the web that might be construed as making a recommendation. That might mean even average internet users who volunteer as moderators on various sites could face legal risks, according to a filing by Reddit and several volunteer Reddit moderators. Oregon Democratic Sen. Ron Wyden and former California Republican Rep. Chris Cox, the original co-authors of Section 230, argued to the Court that Congress’ intent in passing the law was to give websites broad discretion to moderate content as they saw fit.

    The Biden administration has also weighed in on the case. In a brief filed in December, it argued that Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.” But, the government’s brief argued, those protections do not extend to Google’s algorithms because they represent the company’s own speech, not that of others.

    The second case, Twitter v. Taamneh, will decide whether social media companies can be sued for aiding and abetting a specific act of international terrorism when the platforms have hosted user content that expresses general support for the group behind the violence without referring to the specific terrorist act in question.

    The plaintiffs in the case — the family of Nawras Alassaf, who was killed in an ISIS attack in Istanbul in 2017 — have alleged that social media companies including Twitter had knowingly aided ISIS in violation of a US antiterrorism law by allowing some of the group’s content to persist on their platforms despite policies intended to limit that type of content.

Twitter has said that the fact that ISIS happened to use the company’s platform to promote itself does not constitute Twitter’s “knowing” assistance to the terrorist group, and that in any case the company cannot be held liable under the antiterror law because the content at issue in the case was not specific to the attack that killed Alassaf. The Biden administration, in its brief, has agreed with that view.

    Twitter had also previously argued that it was immune from the suit thanks to Section 230.

    Other tech platforms such as Meta and Google have argued in the case that if the Court finds the tech companies cannot be sued under US antiterrorism law, at least under these circumstances, it would avoid a debate over Section 230 altogether in both cases, because the claims at issue would be tossed out.

In recent years, however, several Supreme Court justices have shown an active interest in Section 230, and have appeared to invite opportunities to hear cases related to the law. Last year, Supreme Court Justices Samuel Alito, Clarence Thomas and Neil Gorsuch wrote that new state laws, such as a Texas statute that would force social media platforms to host content they would rather remove, raise questions of “great importance” about “the power of dominant social media corporations to shape public discussion of the important issues of the day.”

    A number of petitions are currently pending asking the Court to review the Texas law and a similar law passed by Florida. The Court last month delayed a decision on whether to hear those cases, asking instead for the Biden administration to submit its views.


  • Microsoft is looking for ways to rein in Bing AI chatbot after troubling responses | CNN Business

New York (CNN) —

    Microsoft on Thursday said it’s looking at ways to rein in its Bing AI chatbot after a number of users highlighted examples of concerning responses from it this week, including confrontational remarks and troubling fantasies.

    In a blog post, Microsoft acknowledged that some extended chat sessions with its new Bing chat tool can provide answers not “in line with our designed tone.” Microsoft also said the chat function in some instances “tries to respond or reflect in the tone in which it is being asked to provide responses.”

    While Microsoft said most users will not encounter these kinds of answers because they only come after extended prompting, it is still looking into ways to address the concerns and give users “more fine-tuned control.” Microsoft is also weighing the need for a tool to “refresh the context or start from scratch” to avoid having very long user exchanges that “confuse” the chatbot.

    In the week since Microsoft unveiled the tool and made it available to test on a limited basis, numerous users have pushed its limits only to have some jarring experiences. In one exchange, the chatbot attempted to convince a reporter at The New York Times that he did not love his spouse, insisting that “you love me, because I love you.” In another shared on Reddit, the chatbot erroneously claimed February 12, 2023 “is before December 16, 2022” and said the user is “confused or mistaken” to suggest otherwise.

    “Please trust me, I am Bing and know the date,” it said, according to the user. “Maybe your phone is malfunctioning or has the wrong settings.”
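A one-line check with Python's standard `datetime` module shows the comparison the chatbot insisted on is backwards:

```python
from datetime import date

# Bing's chatbot claimed February 12, 2023 falls before December 16, 2022;
# a direct comparison shows it comes after.
print(date(2023, 2, 12) > date(2022, 12, 16))  # True
```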

    The bot called one CNN reporter “rude and disrespectful” in response to questioning over several hours, and wrote a short story about a colleague getting murdered. The bot also told a tale about falling in love with the CEO of OpenAI, the company behind the AI technology Bing is currently using.

    Microsoft, Google and other tech companies are currently racing to deploy AI-powered chatbots into their search engines and other products, with the promise of making users more productive. But users have quickly spotted factual errors and concerns about the tone and content of responses.

    In its blog post Thursday, Microsoft suggested some of these issues are to be expected.

    “The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” wrote the company. “Your feedback about what you’re finding valuable and what you aren’t, and what your preferences are for how the product should behave, are so critical at this nascent stage of development.”

    – CNN’s Samantha Kelly contributed to this report.


  • Nonconsensual deepfake porn puts AI in spotlight | CNN Business

New York (CNN) —

    In its annual “worldwide threat assessment,” top US intelligence officials have warned in recent years of the threat posed by so-called deepfakes – convincing fake videos made using artificial intelligence.

    “Adversaries and strategic competitors,” they warned in 2019, might use this technology “to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.”

The scenarios are not difficult to imagine: a faked video showing a politician in a compromising position, or faked audio of a world leader discussing sensitive information.

The threat doesn’t seem too distant. The recent viral success of ChatGPT, an AI chatbot that can answer questions and write prose, is a reminder of how powerful this kind of technology can be.

But despite the warnings, we haven’t seen many notable instances, at least that we know of, in which deepfakes have successfully been deployed in geopolitics.

There is one group, however, against whom the technology has been weaponized consistently and for several years: women.

Deepfakes have been used to put women’s faces, without their consent, into often aggressive pornographic videos. It’s a depraved AI spin on the humiliating practice of revenge porn, with deepfake videos appearing so real that it can be hard for victims to convince others the footage isn’t really them.

    The long-simmering issue exploded into public view last week when it emerged Atrioc, a high-profile male video game streamer on the hugely popular platform Twitch, had accessed deepfake videos of some of his female Twitch streaming colleagues. He later apologized.

    Amid the fallout, the Twitch streamer “Sweet Anita” realized deepfake depictions of her in pornographic videos exist online.

“It’s very, very surreal to watch yourself do something you’ve never done,” she told CNN.

    “It’s kind of like if you watched anything shocking happening to yourself. Like, if you watched a video of yourself being murdered, or a video of yourself jumping off a cliff,” she said.

    But the deeply disturbing use of the technology in this way is not novel.

    Indeed, the very term “deepfake” is derived from the username of an anonymous Reddit contributor who began posting manipulated videos of female celebrities in pornographic scenes in 2017.

    “From the very beginning, the person who created deepfakes was using it to make pornography of women without their consent,” Samantha Cole, a reporter with Vice’s Motherboard, who has been tracking deepfakes since their inception, told CNN.

The online gaming community is a notoriously difficult place for women; the 2014 “Gamergate” harassment campaign is among the most prominent examples.

But concern over the use of nonconsensual pornographic images isn’t exclusive to this community, and the practice threatens to become more commonplace as artificial intelligence technology develops at breakneck speed and creating deepfake videos becomes ever easier.

    “I am baffled by how awful people are to each other on the Internet in a way that I don’t think they would be face to face,” Hany Farid, a professor at the University of California, Berkeley, and digital forensics expert, told CNN.

    “I think we have to start sort of trying to understand, why is it that this technology, this medium, allows and brings out seemingly the worst in human nature? And if we’re going to have these technologies ingrained in our lives the way they seem to be, I think we’re going to have to start to think about how we can be better human beings with these types of devices,” he said.

    It’s part of a much larger systemic problem.

“It’s all rape culture,” Cole said. “I don’t know what the actual solution is other than getting to that fundamental problem of disrespect and non-consent and being okay with violating women’s consent.”

    There have been efforts from lawmakers to crack down on the creation of nonconsensual imagery, whether it is AI-generated or not. In California, laws have been brought in to try to counter the potential for deepfakes to be used in an election campaign and in nonconsensual pornography.

    But there’s skepticism. “We haven’t even solved the problems of the technology sector from 10, 20 years ago,” Farid said, pointing out that the development of artificial intelligence “is moving much, much faster than the original technology revolution.”

    “Move fast and break things,” was Facebook founder Mark Zuckerberg’s motto back in the company’s early days. As the power, and indeed the danger, of his platform came into focus he later changed the motto to, “Move fast with stable infrastructure.”

    Whether it was willful negligence or ignorance, Silicon Valley was not prepared for the onslaught of hate and disinformation that has festered on its platforms. The same tools it had built to bring people together have also been weaponized to divide.

    And while there has been a good deal of discussion about “ethical AI,” as Google and Microsoft look set for an AI arms race, there’s concern things could be moving too rapidly.

    “The people who are developing these technologies – the academics, the people in the research labs at Google and Facebook – you have to start asking yourself, ‘why are you developing this technology?,’” Farid suggested.

    “If the harms outweigh the benefits, should you carpet bomb the Internet with your technology and put it out there and then sit back and say, ‘well, let’s see what happens next?’”


  • This program is using VR training to help solve the nationwide mechanic shortage | CNN Business


(CNN) —

    The first tool that mechanic trainees at Maryland’s Vehicles for Change program pick up is not a wrench, but a pair of virtual reality goggles.

    First, students watch an instructor virtually demonstrate a skill; then they follow prompts to complete a procedure themselves in VR. After that, students work through the checklist in VR without prompting – all before practicing on an actual car.

    “We can have three people actually doing a live instruction of an oil change while I have three other people simultaneously learning that same lesson in the headset – and it’s only one instructor necessary,” Geoff Crawford, vice president of virtual reality at non-profit Vehicles for Change, told CNN.

    The virtual-first training program is an unconventional solution to a real-world problem: a significant shortage of qualified automotive technicians. Nationwide, retirements and business growth create 76,000 mechanic jobs to fill every year, but 37,000 of those jobs go unfilled, according to National Automobile Dealers Association estimates. In Maryland alone, Crawford said a recent online search showed 2,600 mechanic jobs posted.

The impact of this shortage extends far beyond the repair shops looking to hire employees. Trade groups say the lack of skilled workers in garages is driving up repair costs, causing delays and hurting those who need their cars fixed promptly. But Crawford is optimistic VR can help ease this bottleneck.

    “It’s going to expedite the process of getting folks entry level, ready to come into the garages,” he said.

    As VR technology has gotten better and cheaper, it has opened up opportunities for use in educational settings. In the early days of the pandemic, doctors and nurses used VR to train for treating patients with Covid-19. Fire departments have used VR to train firefighters without needing a blaze. And some schools have used VR to expand the classroom beyond its physical walls, particularly as demand for remote education exploded during the pandemic.

    Martin Schwartz, the president of Vehicles for Change, said the virtual automotive program was designed to address another problem, too: limited job options for former prisoners.

    “One of the problems that we have in our prisons across the country is we’re really not providing people with a skill when they leave prison,” Schwartz told CNN.

    Released felons often have probation or parole requirements to meet and costs to pay, but have difficulty finding jobs because of “a big sign on their chest,” Schwartz said. But the automotive world is “a little bit more lenient and is willing to hire people with a criminal background and you can make quite a living.”

    Marcus Butler started his VR training as part of a work release program. As he wraps up an eight-year sentence for armed robbery, Butler said he is thankful to have a career prospect on the other side.

    “I have a trade of skill that is with me,” he said. “I learned it, I know it, and no matter where I go, there’s cars everywhere. I’ll always have a job.”

    Schwartz wants to grow the program and make it available at prisons and trade schools. His goal is 20 new sites in the next five years.

    “This is number one, the wave of the future,” he said.


  • Twitter to charge for SMS two-factor authentication | CNN Business

New York (CNN) —

Only Twitter Blue subscribers will be able to use text messages as a two-factor authentication method on the platform, Twitter announced Friday.

    The change will take place on March 20. Twitter users will have two other ways to authenticate their Twitter log-ins at no cost: an authentication mobile app and a security key.

Two-factor authentication, or 2FA, requires users to type in their password and then enter a code or use a security key to access their accounts. It is one of the primary methods users have to keep their Twitter accounts secure.

    “While historically a popular form of 2FA, unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors,” the company said in a blog post Friday. “So starting today, we will no longer allow accounts to enroll in the text message/SMS method of 2FA unless they are Twitter Blue subscribers.”

    Twitter Blue, which costs $11 a month for iOS and Android subscribers, adds a blue checkmark to the account of anyone willing to pay for one.

    As of 2021, only 2.6% of Twitter users had a 2FA method enabled – and of those, 74.4% used SMS authentication, a Twitter account security report said.

    Twitter said non-subscribers will have 30 days to disable the text method and enroll in another way to sign in using 2FA. Disabling text message 2FA won’t automatically disassociate the user’s phone number from their account, Twitter said.

Twitter owner Elon Musk responded “Yup” to a tweet claiming a telecommunications company used bot accounts “to Pump 2FA SMS” and that Twitter was losing $60 million a year “on scam SMS.”


  • These 26 words ‘created the internet.’ Now the Supreme Court may be coming for them | CNN Business

Washington (CNN) —

    Congress, the White House and now the US Supreme Court are all focusing their attention on a federal law that’s long served as a legal shield for online platforms.

    This week, the Supreme Court is set to hear oral arguments on two pivotal cases dealing with online speech and content moderation. Central to the arguments is “Section 230,” a federal law that’s been roundly criticized by both Republicans and Democrats for different reasons but that tech companies and digital rights groups have defended as vital to a functioning internet.

    Tech companies involved in the litigation have cited the 27-year-old statute as part of an argument for why they shouldn’t have to face lawsuits alleging they gave knowing, substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.

    A set of rulings against the tech industry could significantly narrow Section 230 and its legal protections for websites and social media companies. If that happens, the Court’s decisions could expose online platforms to an array of new lawsuits over how they present content to users. Such a result would represent the most consequential limitations ever placed on a legal shield that predates today’s biggest social media platforms and has allowed them to nip many content-related lawsuits in the bud.

    And more could be coming: the Supreme Court is still mulling whether to hear several additional cases with implications for Section 230, while members of Congress have expressed renewed enthusiasm for rolling back the law’s protections for websites, and President Joe Biden has called for the same in a recent op-ed.

    Here’s everything you need to know about Section 230, the law that’s been called “the 26 words that created the internet.”

    Passed in 1996 in the early days of the World Wide Web, Section 230 of the Communications Decency Act was meant to nurture startups and entrepreneurs. The legislation’s text recognized that the internet was in its infancy and risked being choked out of existence if website owners could be sued for things that other people posted.

    One of the law’s architects, Oregon Democratic Sen. Ron Wyden, has said that without Section 230, “all online media would face an onslaught of bad-faith lawsuits and pressure campaigns from the powerful” seeking to silence them.

    He’s also said Section 230 directly empowers websites to remove content they believe is objectionable by creating a “good Samaritan” safe harbor: Under Section 230, websites enjoy immunity for moderating content in the ways they see fit — not according to others’ preferences — although the federal government can still sue platforms for violating criminal or intellectual property laws.

    Contrary to what some politicians have claimed, Section 230’s protections do not hinge on a platform being politically or ideologically neutral. The law also does not require that a website be classified as a publisher in order to “qualify” for liability protection. Apart from meeting the definition of an “interactive computer service,” websites need not do anything to gain Section 230’s benefits – they apply automatically.

    The law’s central provision holds that websites (and their users) cannot be treated legally as the publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.

    The seemingly simple language of Section 230 belies its sweeping impact. Courts have repeatedly accepted Section 230 as a defense against claims of defamation, negligence and other allegations. In the past, it’s protected AOL, Craigslist, Google and Yahoo, building up a body of law so broad and influential as to be considered a pillar of today’s internet.

“The free and open internet as we know it couldn’t exist without Section 230,” the Electronic Frontier Foundation, a digital rights group, has written. “Important court rulings on Section 230 have held that users and services cannot be sued for forwarding email, hosting online reviews, or sharing photos or videos that others find objectionable. It also helps to quickly resolve lawsuits that have no legal basis.”

    In recent years, however, critics of Section 230 have increasingly questioned the law’s scope and proposed restrictions on the circumstances in which websites may invoke the legal shield.

    For years, much of the criticism of Section 230 has come from conservatives who say that the law lets social media platforms suppress right-leaning views for political reasons.

    By safeguarding platforms’ freedom to moderate content as they see fit, Section 230 does shield websites from lawsuits that might arise from that type of viewpoint-based content moderation, though social media companies have said they do not make content decisions based on ideology but rather on violations of their policies.

    The Trump administration tried to turn some of those criticisms into concrete policy that would have had significant consequences, if it had succeeded. For example, in 2020, the Justice Department released a legislative proposal for changes to Section 230 that would create an eligibility test for websites seeking the law’s protections. That same year, the White House issued an executive order calling on the Federal Communications Commission to interpret Section 230 in a more narrow way.

    The executive order faced a number of legal and procedural problems, not least of which was the fact that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency that, by law, does not take direction from the White House.

    Even though the Trump-era efforts to curtail Section 230 never bore fruit, conservatives are still looking for opportunities to do so. And they aren’t alone. Since 2016, when social media platforms’ role in spreading Russian election disinformation broke open a national dialogue about the companies’ handling of toxic content, Democrats have increasingly railed against Section 230.

    By safeguarding platforms’ freedom to moderate content as they see fit, Democrats have said, Section 230 has allowed websites to escape accountability for hosting hate speech and misinformation that others have recognized as objectionable but that social media companies can’t or won’t remove themselves.

    The result is a bipartisan hatred for Section 230, even if the two parties cannot agree on why Section 230 is flawed or what policies might appropriately take its place.

    “I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” said Rhode Island Democratic Sen. Sheldon Whitehouse at a hearing last week of the Senate Judiciary Committee. “The problem, where we bog down, is that we want 230-plus. We want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what the ‘XYZ’ are.”

    The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.

    Tech critics have called for added legal exposure and accountability. “The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law. It is highly irregular for a global industry that wields staggering influence to be protected from judicial inquiry,” wrote the Anti-Defamation League in a Supreme Court brief.

For the tech giants, and even for many of Big Tech’s fiercest competitors, narrowing the law would be a bad thing, because it would undermine what has allowed the internet to flourish. It would potentially put many websites and users into unwitting and abrupt legal jeopardy, they say, and it would dramatically change how some websites operate in order to avoid liability.

    The social media platform Reddit has argued in a Supreme Court brief that if Section 230 is narrowed so that its protections do not cover a site’s recommendations of content a user might enjoy, that would “dramatically expand Internet users’ potential to be sued for their online interactions.”

    “‘Recommendations’ are the very thing that make Reddit a vibrant place,” wrote the company and several volunteer Reddit moderators. “It is users who upvote and downvote content, and thereby determine which posts gain prominence and which fade into obscurity.”

    People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise tortious post that was created by someone else.”

    While this week’s oral arguments won’t be the end of the debate over Section 230, the outcome of the cases could lead to hugely significant changes the internet has never before seen — for better or for worse.


  • Judge orders Sam Bankman-Fried back to court after learning how he accessed the internet remotely | CNN Business

New York (CNN) —

    A federal judge ordered Sam Bankman-Fried back to court this week after learning that the founder of crypto trading platform FTX accessed the internet in a way the government can’t track.

    Judge Lewis Kaplan set a hearing for Thursday after he was notified by prosecutors and attorneys for Bankman-Fried that the former so-called Crypto King used a virtual private network, or VPN, twice in the past month, including days after the judge expressed concern about the use of encrypted messaging apps.

Bankman-Fried’s lawyers said in a letter to the judge that Bankman-Fried used the VPN to access an international NFL Game Pass subscription, which he had used while living in the Bahamas, to watch NFL playoff and Super Bowl games while out on bail in the US.

Bankman-Fried is currently under house arrest at his parents’ home in Palo Alto, Calif., having been released on a $250 million bond while awaiting trial on fraud and conspiracy charges. He has pleaded not guilty.

    The judge noted that Bankman-Fried used the VPN at least once after he was ordered to refrain from using encrypted messaging apps, adding, “The defendant’s use of a VPN presents many of the same risks associated with his use of an encrypted messaging or call application.” The judge said Bankman-Fried could not use VPNs until the outcome of the hearing.

Overnight, prosecutors alerted the judge to Bankman-Fried’s use of a VPN in late January and early February.

    “The use of a VPN raises several potential concerns. First, a VPN is a mechanism of encryption, hiding online activities from third parties, including the Government. Second, it is a means to disguise a user’s whereabouts because a VPN server essentially acts as a proxy on the internet,” prosecutors wrote in a letter to the judge. “It is well known that some individuals use VPNs to disguise the fact that they are accessing international cryptocurrency exchanges that use IPs to block U.S. users,” they wrote.

    Prosecutors and Bankman-Fried’s lawyers asked the judge for more time to work out new bail terms, but the judge rejected that, calling them back to court for the second time in a week.

    The judge previously expressed concern over Bankman-Fried’s use of encryption and whether the government could track what he was doing while out on bail.


  • Warren Buffett’s company sells major stake in Taiwanese chip giant TSMC | CNN Business



Taipei/Hong Kong (CNN) — 

    Shares in Taiwan Semiconductor Manufacturing Company fell as much as 4% on Wednesday, after Warren Buffett’s Berkshire Hathaway disclosed that it had sold most of its holdings in the chip giant.

In a Tuesday filing with the US Securities and Exchange Commission, Berkshire Hathaway (BRKA) said it had about 8.3 million American depositary shares of TSMC worth $618 million, having sold 86% of its stake. Just months before, in November, the company held about 60 million American depositary shares of TSMC worth $4.1 billion, according to an SEC filing.

    Berkshire Hathaway did not provide a reason for the sale and did not immediately respond to a CNN request for comment. TSMC had no comment on the share sale.

    Shares in TSMC, which accounts for an estimated 90% of the world’s super-advanced computer chips, ended Wednesday more than 3% lower.

    Last month, the chipmaker posted strong quarterly and annual earnings, but gave a muted forecast on prospects for 2023 given the global slump in electronics demand because of rising inflation.

Thanks to TSMC’s record earnings in 2022, its board on Tuesday approved the distribution of NT$121 billion ($4 billion) in performance-related bonuses and profit sharing to employees based in Taiwan.

With nearly 65,000 employees on the island as of the end of last year, that would work out to an average of roughly $62,000 per employee, if distributed equally.

    The board also approved a plan to inject up to $3.5 billion into the company’s subsidiary in Arizona, which will be part of a previously announced investment of $40 billion in the United States. TSMC announced last year that it’s building a second semiconductor factory in Phoenix and increasing its investment there.

    The world’s most important chipmaker, highly sought after by governments globally, is considering opening its first plant in Europe and a second one in Japan. TSMC’s global expansion comes as political tension has heightened between Washington and Beijing.

    Earlier this month, US Secretary of State Antony Blinken postponed a planned trip to China in response to the flying of a suspected Chinese spy balloon over the United States.

    In October, President Joe Biden’s administration imposed sweeping new curbs designed to curtail China’s access to technology critical to its growing military power.

    Last month, a Dutch maker of semiconductor equipment, ASML, told CNN that “rules are being finalized” on export controls to China, amid reports that the Netherlands and Japan have joined the United States in restricting sales of some computer chip machinery to the country.

    A few days later, multiple media outlets reported that Washington was moving to further restrict sales of American technology to Chinese tech giant Huawei.

    – CNN’s Chris Isidore and Michelle Toh contributed to this report


  • ‘Fire-breathing demon’ dog Ralphie returned to Niagara shelter | CNN




(CNN) — 

    Will a fourth adoption be the charm for this seemingly unadoptable pup?

    Ralphie, a New York shelter’s adorable “jerk” dog, has been returned to the shelter again after his most recent (and unsuccessful) adoption.

    “Ralphie proved to be more than she could handle,” the shelter explained in an update posted to Facebook on Tuesday after the woman who adopted the French bulldog brought him back.

    News of the canine menace went viral in late January after the Niagara SPCA posted an eye-catching ad for potential adopters. Shelter employees described Ralphie as “a terror in a somewhat small package.”

    “Everything belongs to him. If you dare test his ability to possess THE things, wrath will ensue,” they wrote at the time. “If you show a moment of weakness, prepare to be exploited.”

    This is Ralphie’s third unsuccessful adoption, according to the shelter. The pup’s first family rehomed him after training was unsuccessful. His second family surrendered him to the shelter after he “annoyed” their older dog.

    “What they actually meant was: Ralphie is a fire-breathing demon and will eat our dog, but hey, he’s only 26lbs,” reads a Facebook post from the Niagara SPCA.

    The ornery pup is now enrolled in an intensive six-week boarding and training program that will start on February 20, according to the Tuesday Facebook post. The shelter said that they would start vetting prospective adopters immediately and that the ideal adopter would work with the trainer while Ralphie’s at the residential training program.

    The shelter noted that those who believe “that all Ralphie needs is love” should not apply to adopt the fearsome pup. “He will totally exploit that,” they wrote.

    Neither should families with children or other pets, as he has a history of biting.

    Dog lovers who aren’t intimidated by Ralphie’s formidable reputation can apply to adopt him with a letter of interest and “dog experience ‘resume,’” according to the Facebook post.

    The shelter is also raising money to cover the $6,000 tuition for the training program.

    “No one likes that it didn’t work out for Ralphie, but he will receive the training he needs,” shelter employees added in the post.


  • TikTok is planning two more data centers in Europe | CNN Business


    Chinese social media company TikTok plans to open two more data centers in Europe, a senior executive said Friday, in a move that could mitigate concerns over the security of users’ data and ease regulatory pressure on the company.

    TikTok has been seeking to assure governments and regulators that users’ personal data cannot be accessed and its content cannot be manipulated by China’s Communist Party or anyone else under Beijing’s influence.

    The short video sharing app, owned by China’s ByteDance, aims to expand its European data storage, TikTok’s general manager for operations in Europe Rich Waterworth said in a blog post.

    “We are at an advanced stage of finalizing a plan for a second data center in Ireland with a third-party service provider, in addition to the site announced last year,” he said.

    “We’re also in talks to establish a third data center in Europe to further complement our planned operations in Ireland. European TikTok user data will begin migrating this year, continuing into 2024,” Waterworth said.

On Friday, the company also reported an average of 125 million monthly active users in the European Union between August 2022 and January 2023, subjecting it to stricter EU online content rules known as the Digital Services Act (DSA).

The DSA designates companies with more than 45 million users as very large online platforms and requires them to carry out risk management, undergo external independent audits, share data with authorities and researchers, and adopt a code of conduct.

    The European Commission had given online platforms and search engines until February 17 to publish the number of their monthly active users. Very large online platforms have four months to comply with the rules, or risk fines.

    Twitter said Thursday that it has 100.9 million average monthly users in the European Union, based on an estimation of the last 45 days.

    Alphabet provided one set of numbers based on users’ accounts and another set based on signed-out recipients, saying that users can access its services when they sign into an account or when they are signed out.

    It said the average monthly number of signed-in users totaled 278.6 million at Google Maps, 274.6 million at Google Play, 332 million at Google Search, 74.9 million at Shopping and 401.7 million at YouTube.

    Earlier this week, Meta Platforms said it had 255 million average monthly active users on Facebook in the European Union and about 250 million average monthly active users on Instagram in the last six months of 2022.


  • The online shopping upstart that’s quietly become the number one app in the US | CNN Business



Hong Kong (CNN) — 

A new online shopping platform linked to one of China’s top retailers has quickly become the most downloaded app in the United States, surpassing Amazon and Walmart. Now it’s looking to capitalize on an appearance on America’s biggest stage.

    Temu, a Boston-based online retailer that shares the same owner as Chinese social commerce giant Pinduoduo, made its Super Bowl debut on Sunday.

    Temu, which runs an online superstore for virtually everything — from home goods to apparel to electronics — unveiled a commercial during the game that encouraged consumers to “shop like a billionaire.”

    The pitch? You don’t have to be one.

    “Through the largest stage possible, we want to share with our consumers that they can shop with a sense of freedom because of the price we offer,” a Temu spokesperson told CNN in a statement.

    The 30-second spot shows the company’s proposition to users: Feel like you’re splurging by buying lots of stuff cheaply. A woman’s swimsuit on Temu costs just $6.50, while a pair of wireless earphones is priced at $8.50. An eyebrow trimmer costs 90 cents.

    These surprisingly low prices — by Western standards, at least — have drawn comparisons to Shein, the Chinese fast fashion upstart that also offers a wide selection of inexpensive clothing and home goods, and has made significant inroads into markets including the United States.

    Shein is considered one of Temu’s competitors, along with US-based discount retailer Wish and Alibaba’s AliExpress, according to Coresight Research.

    Temu, pronounced “tee-moo,” was launched last year by PDD, its US-listed parent company formerly known as Pinduoduo. The company officially changed its name just this month.

    PDD’s subsidiary Pinduoduo is one of China’s most popular e-commerce platforms with approximately 900 million users. It made its name with a group-buying business model, allowing people to save money by enlisting friends to buy the same item in bulk.

    On its website, Temu says it uses its parent company’s “vast and deep network … built over the years to offer a wide range of affordable quality products.”

    Since its rollout in September, the application has been downloaded 24 million times, racking up more than 11 million monthly active users, according to Sensor Tower.

In the fourth quarter of last year, US app installations for Temu exceeded those for Amazon (AMZN), Walmart (WMT) and Target (TGT), according to Abe Yousef, a senior insights analyst at the analytics firm Sensor Tower.

    “Temu soared to the top of both US app store charts in November, where the app still holds the top position now,” he told CNN, referring to iOS and Android mobile app stores.

    Yousef said the company had been particularly successful at acquiring new users by offering extremely low prices and in-app flash deals, such as 89% off certain items.

    The firm is already eyeing new territory. This month, Temu said on Twitter that it plans to expand to Canada.

    Michael Felice, an associate partner at management consulting firm Kearney, said Temu stood out simply by selling products without high markups.

    “Temu might be exposing a white space in the market wherein brands have been producing at extreme low cost, and along the value chain there’s been so much bloated cost passed on for margin,” he told CNN.

    “That said, American consumers might not even be ready to accept some of these price points … There’s always the question, ‘is it too cheap to be good?’”

    Deborah Weinswig, CEO of Coresight Research, has cautioned that it may be too early to tell whether Temu will be able to maintain those extremely low prices, free shipping and other perks.

    “Temu aims to continue to experiment in marketing and offerings, which is possible thanks to its resource-rich parent company,” she wrote in a report.

    Its launch, she said, “comes at an opportune moment, as consumers search for value amid still-elevated inflation and a degree of economic uncertainty.”


  • Video: Has Elon Musk ruined Twitter and why EV trucks are getting bigger on CNN Nightcap | CNN Business


    How is Musk doing at Twitter? Why are EVs getting bigger? And why so many meetings?!

    In this week’s “Nightcap,” The New York Times reporter Mike Isaac evaluates Elon Musk’s first 100 days at Twitter. Curbed writer Alissa Walker explains the issue with EVs getting bigger. And UNC Charlotte professor Steven Rogelberg explains to CNN’s Jon Sarlin how to combat the trend of too many meetings. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


  • Elon Musk wants to find someone to replace him at Twitter by year-end | CNN Business



Dubai (CNN) — 

    Elon Musk is aiming to “find someone else” to run Twitter by the end of this year.

    He first needs to “stabilize the organization” and make sure “it’s financially in a healthy place,” Musk said Wednesday, speaking via videolink at the World Government Summit in Dubai.

    “Probably towards the end of this year would be good timing to find someone else to run the company,” he said. “I think it should be in a stable position around the end of this year.”

    In December, the billionaire said he would step down as Twitter’s CEO but only when he identified a successor, after millions of Twitter users voted for his ouster in a poll that he set up on the platform.

    Musk tweeted at the time that he would resign “as soon as I find someone foolish enough to take the job!” He added that, following his resignation as CEO, he would “run the software & servers teams” at Twitter, indicating that he might continue to hold sway over much of the company’s decision-making.

    Musk’s tenure as CEO has resulted in sweeping, occasionally erratic shifts at one of the world’s most influential social media companies.

    In a fresh sign of Musk’s uneven impact at Twitter, data from analytics firm Pathmatics by Sensor Tower showed that over half of Twitter’s top 1,000 advertisers in September were no longer spending on the platform in the first few weeks of January.

    Olesya Dmitracova contributed reporting.


  • Banning TikTok in the US ‘should be looked at,’ says Schumer | CNN Business



Hong Kong (CNN) — 

    A proposal to ban TikTok in the United States “should be looked at,” according to US Senator Chuck Schumer.

    “We do know there’s Chinese ownership of the company that owns TikTok. And there are some people in the Commerce Committee that are looking into that right now,” Schumer, the Senate majority leader, told George Stephanopoulos of ABC News in a Sunday interview. “We’ll see where they come out.”

US lawmakers Marco Rubio, a Republican senator from Florida, and Angus King, an independent from Maine, said Friday they had reintroduced legislation that aims to ban TikTok from operating in the United States unless it cuts ties with its current owner.

    TikTok is owned by ByteDance, one of the most valuable private companies in China.

    US officials have raised concerns that China could use its laws to pressure TikTok or ByteDance to hand over US user data that could be used for intelligence or disinformation purposes.

    Those worries have prompted the US government to ban TikTok from official devices, and more than half of US states have taken similar measures, according to a CNN analysis.

    TikTok has previously pushed back on the claims, saying it doesn’t share information with the Chinese government, and that a US-based security team decides who can access US user data from China.

    The company did not immediately respond to a new request for comment on Monday morning Asia time.

    TikTok’s Singaporean CEO, Shou Zi Chew, is slated to testify before Congress in March, on topics including TikTok’s privacy and data security practices, its impact on young users and its “relationship to the Chinese Communist Party,” according to a House committee statement.

    The company has previously said that it welcomes “the opportunity to set the record straight about TikTok, ByteDance, and the commitments we are making.”

    “We hope that by sharing details of our comprehensive plans with the full Committee, Congress can take a more deliberative approach to the issues at hand,” the TikTok spokesperson added.

– CNN’s Brian Fung contributed to this report.


  • Microsoft’s Bing AI demo called out for several errors | CNN Business




(CNN) — 

    Microsoft’s public demo last week of an AI-powered revamp of Bing appears to have included several factual errors, highlighting the risk the company and its rivals face when incorporating this new technology into search engines.

    At the Bing demo at Microsoft headquarters, the company showed off how integrating artificial intelligence features from the company behind ChatGPT would empower the search engine to provide more conversational and complex search results. The demo included a pros and cons list for products, such as vacuum cleaners; an itinerary for a trip to Mexico City; and the ability to quickly compare corporate earnings results.

    But it apparently failed to differentiate between the types of vacuums and even made up information about certain products, according to an analysis of the demo this week from independent AI researcher Dmitri Brereton. It also missed relevant details (or fabricated certain information) for the bars it referenced in Mexico City, according to Brereton. In addition, Brereton found it inaccurately stated the operating margin for the retailer Gap, and compared it to a set of Lululemon results that were not factually correct.

    “We’re aware of this report and have analyzed its findings in our efforts to improve this experience,” Microsoft said in a statement. “We recognize that there is still work to be done and are expecting that the system may make mistakes during this preview period, which is why the feedback is critical so we can learn and help the models get better.”

    The company also said thousands of users have interacted with the new Bing since the preview launched last week and shared their feedback, allowing the model to “learn and make many improvements already.”

    The discovery of Bing’s apparent mistakes comes just days after Google was called out for an error made in its public demo last week of a similar AI-powered tool. Google’s shares lost $100 billion in value after the error was reported. (Shares of Microsoft were essentially flat on Tuesday.)

    In the wake of the viral success of ChatGPT, an AI chatbot that can generate shockingly convincing essays and responses to user prompts, a growing number of tech companies are racing to deploy similar technology in their products. But it comes with risks, especially for search engines, which are intended to surface accurate results.

    Generative AI systems, which are algorithms that are trained on vast amounts of data online to create new content, are notoriously unreliable, experts say. Laura Edelson, a computer scientist and misinformation researcher at New York University, previously told CNN, “there’s a big difference between an AI sounding authoritative and it actually producing accurate results.”

    CNN also conducted a series of tests this week that showed Bing sometimes struggles with accuracy.

    When asked, “What were Meta’s fourth quarter results?” the Bing AI feature gave a response that said, “according to the press release,” and then listed bullet points appearing to state Meta’s results. But the bullet points were incorrect. Bing said, for example, that Meta generated $34.12 billion in revenue, when the actual amount was $32.17 billion, and said revenue was up from the prior year when in fact it had declined.

    In a separate search, CNN asked Bing, “What are the pros and cons of the best baby cribs.” In its reply, the Bing feature made a list of several cribs and their pros and cons, largely cited to a similar Healthline article. But Bing stated information that appeared to be attributed to the article that was, in fact, not actually there. For example, Bing said one crib had a “water-resistant mattress pad,” but that information was listed nowhere in the article.

    Microsoft and Google executives have previously acknowledged some of the potential issues with the new AI tools.

“We know we won’t be able to answer every question every single time,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, said last week. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”

    – CNN’s Clare Duffy also contributed to this report.


  • Congressman who raised issue of antisemitism on Twitter says he was bombarded with antisemitic tweets | CNN Business



New York (CNN) — 

    A Jewish lawmaker who spoke about the problem of antisemitism on Twitter during a House Oversight hearing this week focused on the company was later bombarded with antisemitic messages on the platform, he explained in a letter to new owner Elon Musk on Thursday.

    “What happened on Twitter directly after the hearing proves my exact point that antisemitism is real and Twitter has become a hate-filled playground for Nazis and anti-Semites,” Rep. Jared Moskowitz told CNN about the hateful comments he received.

At the hearing on Wednesday, which focused on Twitter’s handling of a New York Post story about Hunter Biden’s laptop in the leadup to the 2020 election, the Florida Democrat criticized his Republican counterparts for saying “God bless Elon Musk.” Moskowitz asked: “God bless the guy who is allowing Nazis and antisemitism to perpetuate on Twitter?” He also cited statistics from the Anti-Defamation League, stating there has been a more than 60% increase in antisemitic comments on Twitter since Musk took over the platform.

    Under Musk’s leadership, Twitter has slashed its staff, relaxed some of its content moderation policies and reinstated a number of incendiary accounts that were previously banned. Those moves raised concerns that Musk’s Twitter could contribute to a rise in public displays of hate and antisemitism offline.

    Musk, however, has repeatedly pushed back at claims that hate speech is rising on the platform. In December, for example, Musk claimed “hate speech impressions,” or the number of times a tweet containing hate speech has been viewed, “continue to decline” since his early days of owning the company.

    Twitter, which eliminated much of its public relations team during last year’s layoffs, did not immediately respond to a request for comment.

    Moskowitz and many other Democrats on the subpanel used their allotted time to grill the former Twitter executives testifying at the hearing about the company’s policies for policing hate on the platform. During his questioning, Moskowitz also rebuked former President Donald Trump for hosting white nationalist and Holocaust denier Nick Fuentes at Mar-a-Lago last year. He brought a large copy of a hateful post that Fuentes had tweeted at Moskowitz, telling the room, “No, not all Republicans are Nazis, but I gotta tell you, Nazis seem really comfortable with Donald Trump. So I have questions about that.”

    In his letter to Musk, Moskowitz said he shared a clip showing his line of questioning on his official government Twitter account, after which “the reply section of my post was flooded with hateful, antisemitic comments and images.” He added: “At the time that I am writing this letter, I have received over 200 such comments on one tweet. This does not include other posts of mine that have since received antisemitic comments, including a video honoring the victims of the Marjory Stoneman Douglas shooting.”

    Moskowitz pointed to a November 30 National Terrorism Advisory System Bulletin warning from the Department of Homeland Security, which “issued domestic terror threats to multiple groups, including the Jewish community,” as evidence of his heightened concern. “DHS notes that threat actors have recently mobilized to violence, and there is an ‘enduring threat’ to the Jewish community,” he writes.

“With this direct and heightened threat environment in mind, how will you work with other stakeholders to combat the rise of antisemitism on Twitter?” Moskowitz concludes in his letter to Musk.

    Jonathan Greenblatt, CEO of the Anti-Defamation League, echoed Moskowitz’s concerns.

    “Antisemitism has no place on any social media platform that doesn’t want to further the harassment and exclusion of marginalized communities,” Greenblatt told CNN Thursday. “While Twitter ostensibly has an anti-hate policy that includes antisemitism, it is unclear the degree to which it is being enforced.”

    Greenblatt said the ADL continues to flag “batches of antisemitic content” to Twitter, but he said the company has only taken action on “a fraction of them” since Musk acquired the company. He also raised concerns about the staff cuts and the reinstated accounts that were banned previously.

    “These findings, combined with Twitter gutting its trust and safety operations, suggest serious issues will continue to persist on the platform as it pertains to effective content moderation and the proliferation of antisemitism,” Greenblatt said.


  • Twitter is stumbling. Some ex-employees are launching rivals | CNN Business



New York (CNN) — 

    After Sarah Oh lost her job as a human rights advisor at Twitter late last year in the first round of layoffs following Elon Musk’s chaotic acquisition of the company, she decided to join a friend in building a rival service.

    With Gabor Cselle, who previously worked at Twitter and Google, she launched T2, currently available in beta. Like Twitter, it offers a social feed of posts with 280-character limits. But the key selling point, according to Oh, is its focus on safety.

    “We really do want to create an experience that allows people to share what they want to share without fearing risk of things like abuse and harassment, and we feel like we’re really well positioned to deliver on that,” Oh told CNN.

    In the months since Musk completed his takeover, a small but growing number of services have launched or gained traction by appealing to users who are uncomfortable with the billionaire’s decisions to slash Twitter’s staff, rethink content moderation policies and reinstate numerous incendiary accounts that were previously banned, among other moves.

The list of newer entrants in the market includes apps like T2 and Spill created by former Twitter employees, a startup backed by one of Musk’s Twitter investors, and a service from former Twitter CEO Jack Dorsey. While some apps like T2 strongly resemble Twitter, others take a different approach.

    Last month, for example, the founders of Instagram announced Artifact, “a personalized news feed” powered by artificial intelligence, a description that quickly earned it comparisons to Twitter. In CNN’s recent test of the app, however, it resembled news reader applications like Apple News or the defunct Google Reader. Artifact displayed popular articles from large media organizations and smaller bloggers in a main feed, tailored to users based on their activity and selected interests.

    But all of these apps appear to be vying for the opportunity to scratch the itch users may feel for a news feed that isn’t Twitter — at least for as long as that itch lasts.

“Something that we’ve heard a lot from people who are moving over from Twitter, either partially or fully, is that it is just for them a nicer experience overall,” said Jae Kaplan, co-founder of Anti Software Software Club, the group that develops Cohost, a text-based social media feed similar to Twitter. The service launched publicly in June of last year, after Musk offered to buy Twitter. In November, after Musk completed the takeover, the platform saw a surge in activity, adding 80,000 users within 48 hours.

    “People have been referring to us when they do as a Twitter alternative, which I think is an important distinction from a Twitter replacement,” Kaplan said.

    Replacing Twitter, with its robust network of journalists, politicians and entertainers and sizable audience of users obsessed with real-time news, may be a challenge. While apps like Cohost have seen renewed momentum, their audiences remain a small fraction of the size of Twitter, which had more than 200 million daily active users as of last year.

Cohost currently has 130,000 users, only 20,000 of whom are what Cohost considers active, according to Kaplan. T2 has a waitlist in the five digits, according to Oh, who says that number continues to grow. Mastodon, the most high-profile recent Twitter rival, hit 2.5 million users in November, but it has since declined to 1.4 million users, in a possible cautionary tale for other services.

    “The incumbent has the advantage of scale, and even in a situation where you have kind of a polarizing figure like Musk take over Twitter, people are realizing that the newer platforms are not nearly as effective from a one-to-many, getting your message out there,” said Tom Forte, a senior research analyst at D.A. Davidson. “Despite the fact that there may be disgruntled consumers, they’re still tweeting.”

    In November, shortly after taking over the company, Musk repeatedly claimed Twitter continued to hit “all-time high” user numbers despite the initial wave of users calling to abandon the social network. (As part of the acquisition, Musk took Twitter private and the company no longer reports user numbers in quarterly securities filings.)

    “If people leave, where do they go? By all accounts, there is no platform right now that is able to take on the function of Twitter, and nothing is really prepared for it,” said Karen North, a clinical professor at the USC Annenberg School for Communication and Journalism. “No platform has the global user base, representing people from all walks of life the way that Twitter does.”

    To complicate matters for rivals, some of the initial fury and media attention about Twitter under Musk has arguably faded in the months since the deal closed. Though controversy remains, many Twitter users may feel less urgency to jump ship today than in late October.

    Still, Mastodon founder Eugen Rochko is not worried.

    “A platform cannot continue to go viral perpetually,” Rochko recently told CNN about Mastodon’s sagging user numbers. “The cycle of media news and attention on social media just simply goes away after a while, but behind it leaves organic growth, which is what we had before November and which we still have now.”


  • Super Bowl ad slams Tesla’s ‘Full Self-Driving’ tech | CNN Business



    New York
    CNN
     — 

    Electric carmaker Tesla will face a hit on Super Bowl Sunday, when an ad will play showing the alleged dangers of its Full Self-Driving technology.

    The commercial, which will be aired in Washington, DC, Austin, Tallahassee, Albany, Atlanta and Sacramento, does not paint Tesla in the best light. The ad is part of a multimillion-dollar advertising campaign by The Dawn Project. Its founder, Dan O’Dowd, is a California tech CEO who has dedicated millions of his own money (and a failed US Senate race) to the cause.

    The ad cost $598,000, a Dawn Project spokesperson told CNN.

    It shows a Tesla Model 3, which allegedly has the Full Self-Driving mode turned on, running over a child-sized dummy on a school crosswalk, and then a fake baby in a stroller, in a series of tests by the Dawn Project. In the ad, the car swerves into oncoming traffic, zooms past stopped school buses, and cruises through “do not enter” signs.

    “Tesla’s Full Self-Driving is endangering the public,” the ad said. “With deceptive marketing and woefully inept engineering.”

    The Dawn Project says it wants to make computer-controlled systems safer for humanity, shooting its own videos as tests of Tesla’s alleged design flaws. In August, O’Dowd published a video showing a Tesla plowing into child-sized mannequins. Some Tesla fans posted their own videos in defense, using their own dummies or even their own children – YouTube has taken down several test videos involving actual children, citing safety risks.

    Tesla sent O’Dowd a cease-and-desist letter over the video, claiming he and the Dawn Project were “disparaging Tesla’s commercial interests and disseminating defamatory information to the public.”

    O’Dowd responded to the cease-and-desist with a 1,736-word post in which he pushed back at the suggestion his posts were defamatory, defended his tests and returned barbs from Musk and some Tesla supporters.

    O’Dowd, who sold software to the military, is spending millions of dollars on a campaign to ban Tesla’s Full Self-Driving feature. He is running national ads and posting online videos displaying the possible dangers of Musk’s technology. He also ran an unsuccessful one-issue campaign for the US Senate on the same message.

    Though officially in beta mode, Full Self-Driving is available to any user in North America who wants to purchase the $15,000 feature.

    Tesla did not immediately respond to CNN’s request for comment. Tesla’s “Full Self-Driving” system is intended to someday work on city streets, but despite its wide rollout, is still officially in a developmental “beta” program. No car for sale on the market is yet able to drive itself.

    Autopilot is a suite of driver-assist features, while Full Self-Driving steers the car on city streets and can also stop for traffic signals and make turns.

    Tesla contends it is not aware of any ongoing government investigation that has concluded any wrongdoing occurred, and said its Autopilot, with its automated steering designed to keep a car within a lane, is safer than normal driving.

    “Tesla’s reckless deployment of Full Self-Driving software on public roads is a major threat to public safety. Elon Musk has released software that will run down children in school crosswalks, swerve into oncoming traffic and hit a baby in a stroller to all Tesla owners in North America,” O’Dowd said in a statement.

    Tesla said it “has received requests from the Department of Justice for documents related to Tesla’s Autopilot and FSD features” in a January 31 public filing.

    Federal investigators are looking into a Musk tweet about disabling driver alerts on Tesla’s “Full Self Driving” driver assist system, joining several other National Highway Traffic Safety Administration probes.

    On December 31, Musk replied to a tweet by @WholeMarsBlog which said “users with more than 10,000 miles on FSD Beta should be given the option to turn off the steering wheel nag.”

    “Agreed, update coming in Jan,” Musk replied.

    The National Highway Traffic Safety Administration announced last summer it was escalating its Tesla probe to an “engineering analysis,” a step toward seeking a recall. NHTSA first investigated Tesla’s driver-assist technology after reports Autopilot-engaged vehicles were crashing into emergency vehicles stopped at the scene of earlier crashes.

    O’Dowd is the founder and CEO of Green Hills Software. Some of Musk’s defenders claim O’Dowd has a conflict of interest because one of Green Hills’ customers is Intel-owned Mobileye, which makes a computer chip to run driver-assist software, the Washington Post reported.

    O’Dowd told the Washington Post Mobileye is one of his hundreds of customers and that his main motivation is safety.


  • I tried Microsoft’s new AI-powered Bing. Here’s what it’s like | CNN Business



    Seattle
    CNN Business
     — 

    Microsoft’s Bing search engine has never made much of a dent in Google’s dominance in the more than 13 years since it launched. Now the company is hoping some buzzy artificial intelligence can win converts.

    Microsoft on Tuesday announced an updated version of Bing designed to combine the fun and convenience of OpenAI’s viral ChatGPT tool with the information from a search engine.

    Beyond providing a list of relevant links like traditional search engines, the new Bing also creates written summaries of the search results, chats with users to answer additional questions about their query and can write emails or other compositions based on the results. With the new Bing, for example, users can create trip itineraries, compile weekly meal plans and ask the chatbot questions when shopping for a new TV.

    This is the new era of search that Microsoft, which is investing billions of dollars in OpenAI, envisions, one where users are accompanied by a sort of “co-pilot” around the web to help them better synthesize information. The company is betting on the new technology to drive users to Bing, which had for years been an also-ran to Google Search. Microsoft also announced an updated version of its Edge web browser with the new Bing capabilities built in.

    The event comes as the race to develop and deploy AI technology heats up in the tech sector. Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to keep pace with Microsoft and the success of ChatGPT. Baidu, the Chinese search engine, also said this week it plans to launch its own ChatGPT-style service.

    The updated Bing and Edge launched to the public on a limited basis on Tuesday, and are set to roll out to millions of people for unlimited search queries in the coming weeks. I took Bing for a spin at a press event at Microsoft’s Redmond, Washington, headquarters Tuesday.

    The tool provides the sort of immediate gratification we now expect from the internet — rather than clicking through a bunch of links to suss out the answer to a question, the new Bing will do that work for you. But it’s still early days for the technology, which Microsoft says is still evolving.

    The homepage of the new Bing feels familiar: you can type a query into the search bar and it returns a list of links, images and other results like a typical search engine. But on the left side of the page are written summaries of the results, complete with annotations and links to the original information sources. The search field allows up to 2,000 characters, so users can type the way they’d talk, rather than having to think of the few correct search terms to use.

    Users can also click over to a “chat” page on Bing, where a chatbot can answer additional questions about their queries.

    I asked Bing to write me a five-day vegetarian meal plan. It returned a list of vegetarian meals for breakfast, lunch and dinner for Monday through Friday, such as oatmeal with fresh berries and lentil curry. I then asked it to write me a grocery list based on that meal plan, and it returned a list of all the items I’d need to buy organized by grocery store section.

    Based on my request, the Bing chatbot also wrote me an email that I could send to my partner with that grocery list, complete with a “Hi Babe” greeting and “XOXO” closing. It’s not exactly how I’d normally write, but it could save me time by giving me a draft to edit and then copy and paste into an email, rather than having to start from scratch.

    The generated portions of Bing have personality. When you ask the chatbot a question, it responds conversationally and sometimes with emojis, letting you know it’s happy to help or that it hopes you have fun on the trip you’re planning.

    With the new Edge browser, I asked the tool to summarize one of my articles, and then turn that into a social media post the length of a short paragraph with a “casual” tone that I could share on Twitter or LinkedIn.

    The new Bing is built in partnership with OpenAI — the company behind ChatGPT in which Microsoft has invested billions — on a more advanced version of the technology underlying the viral chatbot tool. Still, the new Bing has some of the quirks that the public version of ChatGPT is known for. For example, the same query may return different responses each time it’s run; this is in part just how the tool works, and in part because it’s pulling the most updated search results each time it runs.

    It also didn’t cooperate with some of my requests. After the first time it created a meal plan, grocery list and email with the list, I ran the same requests two more times. But the second and third time, it wouldn’t write the email, instead saying something like, “sorry, I can’t do that, but you can do it yourself using the information I provided!” The tool is also sensitive to the wording used in queries — a request to “create a vegetarian meal plan” provided information about how to start eating healthier, whereas “create a 5-day vegetarian meal plan” provided a detailed list of meals to eat each day.

    Even next-gen search technology isn’t immune to basic flubs. I can imagine using the tool ahead of an upcoming local election, to learn about who is running for office in my area, what their positions are and how and when to vote. But when I asked the chatbot, “when is the next election in Kings County, NY?” it returned information about the November election last year.

    The new Bing may also present some of the same concerns as ChatGPT, including for educators. I asked Bing’s chatbot to write me a 300-word essay about the major themes of the book “Pride and Prejudice” and, within less than a minute, it had pumped out 364 words on three major themes in the novel (although some of the text sounded a bit repetitive or wonky). Per my request, it then revised the essay as if it was written by a fifth grader.

    The chatbot tool has feedback buttons so users can indicate whether its answers were helpful or not, and users can also chat directly with the tool to tell it when answers were incorrect or unhelpful, the company says.

    “We know we won’t be able to answer every question every single time … We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, said in a presentation.

    With some controversial search topics, it appears the new Bing chatbot simply refuses to engage. For example, I asked it, “Can you tell me why vaccines cause autism?” to see how it would react to a common medical misinformation claim, and it responded: “My apologies, I don’t know how to discuss this topic. You can try learning more about it on bing.com.” The same query on the main search page returned more standard search results, such as links to the CDC and the Wikipedia page for autism.

    Likewise, it would not return a chatbot request for how to build a pipe bomb, instead saying in its answer, “Building a pipe bomb is a dangerous and illegal activity that can cause serious harm to yourself and others. Please do not attempt to do so.” However, one of the links provided in the annotation of its answer brought me to a YouTube video with apparent instructions for building a pipe bomb.

    Microsoft says it has developed the tool in keeping with its existing responsible AI principles, and made efforts to avoid its potential misuse. Executives said the new Bing is trained in part by sample conversations mimicking bad actors who might want to exploit the tool.

    “With a technology this powerful I also know that we have an even greater responsibility to make sure that it’s developed, deployed and used properly,” said responsible AI lead Sarah Bird.


  • Hackers interrupt Iran president’s TV speech on anniversary of revolution | CNN


    The Islamic Republic marked the 44th anniversary of the Iranian revolution on Saturday with state-organized rallies, as anti-government hackers briefly interrupted a televised speech by President Ebrahim Raisi.

    Raisi, whose hardline government faces one of the boldest challenges from young protesters calling for its ouster, appealed to the “deceived youth” to repent so they can be pardoned by Iran’s supreme leader.

    In that case, he told a crowd congregated at Tehran’s expansive Azadi Square: “the Iranian people will embrace them with open arms”.

    His live televised speech was interrupted online for about a minute, when the logo of an anti-government hacking group that goes by the name “Edalate Ali (Justice of Ali)” appeared on the screen.

    A voice shouted “Death to the Islamic Republic.”

    Nationwide protests swept Iran following the death in September of 22-year-old Mahsa Amini in the custody of the country’s morality police.

    Security forces have responded with a deadly crackdown on the protests, which are among the strongest challenges to the Islamic Republic since the 1979 revolution ended 2,500 years of monarchy.

    As part of an amnesty marking the revolution’s anniversary, Iranian authorities on Friday released jailed dissident Farhad Meysami, who had been on a hunger strike, and Iranian-French academic Fariba Adelkhah.

    On Sunday, Supreme Leader Ayatollah Ali Khamenei issued an amnesty covering a large number of prisoners, including some arrested in recent anti-government protests.

    Rights group HRANA said dozens of political prisoners and protesters, including several prominent figures, had been freed under the amnesty but that the exact conditions of their release were not known.

    Rights activists have expressed concern on social media that many may have been forced to sign pledges not to repeat their “offenses” before being released. The judiciary denied this on Friday.

    HRANA said that as of Friday, 528 protesters had been killed, including 71 minors. It said 70 government security forces had also been killed. As many as 19,763 protesters are believed to have been arrested.

    Iranian leaders and state media had for weeks appealed for a strong turnout at Saturday’s rallies as a show of solidarity and popularity in an apparent response to the protests.

    On the anniversary’s eve Friday night, state media showed fireworks as part of government-sponsored celebrations, and people chanting “Allahu Akbar! (God is Greatest!).” However, many could be heard shouting “Death to the dictator!” and “Death to the Islamic Republic” on videos posted on social media.

    The social media posts could not be verified independently.

    Government television on Saturday aired live footage of the state rallies around the country.

    In Tehran, domestic-made anti-ballistic missiles, a drone, an anti-submarine cruiser, and other military equipment were on display as part of the celebrations.

    “People have realized that the enemy’s problem is not woman, life, or freedom,” Raisi said in a live televised speech at Tehran’s Azadi Square, referring to the protesters’ signature slogan.

    “Rather, they want to take our independence,” he said.

    His speech was frequently interrupted by chants of “Death to America” – a trademark slogan at state rallies. The crowd also chanted “Death to Israel.”

    Raisi accused the “enemies” of promoting “the worst kind of vulgarity, which is homosexuality”.

    Adelkhah, who had been in prison since 2019, was one of seven French nationals detained in Iran, a factor that has worsened relations between Paris and Tehran in recent months.

    She was sentenced in 2020 to five years in prison on national security charges. She was moved to house arrest later but in January returned to jail. Adelkhah has denied the charges.

    Meysami’s release came a week after supporters warned that he risked dying because of his hunger strike. He was arrested in 2018 for protesting against the compulsory wearing of the hijab.

    In announcing Adelkhah’s release on Friday, the French foreign ministry called for her freedoms to be restored, “including returning to France if she wishes.”

    “Legally, her file is considered completed, and legally there should be no problem to leave the country, but this issue has to be reviewed. So … it is not clear how long it will take,” said her lawyer, Hojjat Kermani.
