ReportWire

  • New reversal by Twitter after move sparked MTA withdrawal

    In an about-face, Twitter says it has restored free access to a key tool for verified government and “publicly owned” services so they can tweet weather, transit and other alerts

    By BARBARA ORTUTAY, AP Technology Writer

    In an about-face, Twitter says it has restored free access to a key tool for verified government and “publicly owned” services so they can tweet weather, transit and other alerts after New York City’s transit agency said earlier this week it would no longer use the platform for its service advisories.

    The Metropolitan Transportation Authority is among countless official and unofficial accounts that abruptly lost access to Twitter’s API, or application programming interface, to send out automated alerts about service changes and emergencies last week. By Thursday afternoon, senior executives agreed to cease publishing service alerts to the platform altogether.

    The decision put the country’s largest transportation network among a growing number of accounts, from National Public Radio to Elton John, who have reduced their Twitter presence or left the platform since its takeover by Elon Musk.

    Twitter had signaled that the days of private accounts disseminating troves of information at no cost may be ending. Last month, the company announced a new pricing system that would charge for access to its API, which is used by accounts that post frequent alerts, such as transit and weather agencies.
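    The API access at issue here boils down to an authenticated HTTP call. As a minimal sketch, assuming Twitter's documented v2 create-Tweet endpoint (`POST /2/tweets`), an agency's alerting bot might compose and shape a service alert like this; the credential placeholder and helper names are illustrative, not from the article:

    ```python
    import json

    # Twitter API v2 endpoint for posting a Tweet; the bearer token below is a
    # placeholder, and this sketch only builds the request rather than sending it.
    TWEETS_ENDPOINT = "https://api.twitter.com/2/tweets"

    def format_alert(line: str, status: str, advice: str) -> str:
        """Compose a rider-facing alert, truncated to the 280-character limit."""
        return f"[{line}] {status}. {advice}"[:280]

    def build_request(text: str) -> dict:
        """Shape of the authenticated POST an alerting bot would send."""
        return {
            "url": TWEETS_ENDPOINT,
            "method": "POST",
            "headers": {
                "Authorization": "Bearer <OAUTH2-TOKEN>",  # placeholder credential
                "Content-Type": "application/json",
            },
            "body": json.dumps({"text": text}),
        }

    request = build_request(
        format_alert("A/C", "Delays in both directions",
                     "Consider the F as an alternative")
    )
    ```

    Building the request without sending it keeps the sketch runnable without credentials; under the pricing change described above, actually issuing the POST at an agency's alert volume is what would incur the fees.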

    MTA officials estimated the cost could run as high as $50,000 a month. For a transit agency that faces a multibillion-dollar deficit, paying that much raised concerns.

    So last Thursday, the MTA told its 1 million Twitter followers that it would no longer use the platform for service alerts and information.

    On Tuesday, Twitter backtracked and announced that “Verified gov or publicly owned services who tweet weather alerts, transport updates and emergency notifications may use the API, for these critical purposes, for free.”

    In recent days, MTA officials have been in touch with Twitter’s development team, though the agency has not said whether it will return to publishing service alerts on Twitter in light of the change.

    A representative for the MTA did not immediately respond to a message for comment.

    ___

    Associated Press Writer Jake Offenhartz contributed to this story from New York.

  • Students are turning to ChatGPT for study help, and Chegg stock is plummeting 30%

    Chegg Inc. shares plunged more than 30% Monday afternoon and were headed toward their lowest price since 2017, after the online-education company’s forecast called for an unexpected revenue decline as students begin to use ChatGPT.

    Chegg (CHGG) reported first-quarter earnings of $2.2 million, or 2 cents a share, on net revenue of $187.6 million, down from $202.2 million a year ago. After adjusting for stock compensation and other effects, the company reported earnings of 27 cents a share, down from 32 cents a share in the same…

  • Telegram app back on in Brazil after judge lifts suspension

    RIO DE JANEIRO, Brazil — Internet providers and wireless carriers in Brazil stopped blocking Telegram on Saturday after a federal judge partially revised a ruling suspending the social media app over its failure to surrender data on neo-Nazi activity.

    However, the judge kept in place a daily fine of 1 million reais (about $200,000) for Telegram’s refusal to provide the data, according to a press statement provided by the federal court that issued the ruling.

    Complete suspension “is not reasonable, given the broad impact throughout the national territory on the freedom of communication of thousands of people who are absolutely strangers to the facts under investigation,” judge Flávio Lucas was quoted as saying in the statement.

    Telegram had been temporarily suspended pursuant to a police inquiry into school shootings in November, when a former student armed with a semiautomatic pistol and wearing a bulletproof vest fatally shot three people and wounded 13 after barging into two schools in the small town of Aracruz in Espirito Santo state.

    The 16-year-old is believed to have been a member of extremist channels on Telegram, where tutorials on murder and the manufacture of bombs were disseminated, the court’s statement said.

    Federal police ordered Telegram to provide details on names, tax identity numbers, profile photos, bank information and registered credit cards of channel members and later disputed Telegram’s claim that it could not comply because the channel had been suspended, the court statement said.

    Telegram founder and CEO Pavel Durov said in a statement Thursday that the company was appealing the Brazil-wide ban ordered the previous day, claiming compliance was “technologically impossible” and arguing that Telegram’s mission is to protect privacy and free speech.

    The company says in an online FAQ that it has never shared data on users with any government.

    It’s unclear how much of the requested data Telegram is able to provide. Only a phone number is required to create a Telegram account, and pseudonyms are routinely used. Further, beginning in December, Telegram offered the option of creating accounts with anonymous numbers.

    The court statement noted Telegram’s “past clashes with the judiciary” in Brazil. Last year, Supreme Court Justice Alexandre de Moraes ordered a nationwide shutdown of Telegram, arguing it hadn’t cooperated with authorities. It lasted two days and was lifted after Durov blamed his company’s initial lack of response on a communications snafu.

    “Technology companies need to understand that cyberspace cannot be a free territory, a different world … with its own rules created and managed by the agents who commercially exploit it,” Lucas, the judge in the current case, said in Saturday’s statement.

    Brazil has been grappling with a wave of school attacks. There have been almost two dozen attacks or violent episodes in schools since 2000, half of them in the last 12 months, including the killing of four children at a day care center April 5.

    Brazil’s federal government has strived to stamp out school violence with a particular focus on the influence of social media. The goal is to prevent further incidents, particularly holding platforms responsible for failing to remove content that allegedly incites violence.

    Regulation of social media platforms was a recurring theme earlier this month when President Luiz Inácio Lula da Silva met with his Cabinet ministers, Supreme Court justices, governors and mayors.

    Telegram has been blocked in the past by other governments, including Iran, China and Russia.

    Durov, an ethnic Russian whose company is based in the United Arab Emirates, has managed to coexist with the Kremlin despite its crackdown on speech and Western media following Moscow’s invasion of Ukraine last year.

    So-called “patriotic” hackers loyal to the Kremlin use the app to organize cyberattacks on Ukrainian and NATO targets. The other side uses it to fight back.

    Security researchers and intelligence agencies regularly track certain Telegram groups, focusing on ransomware gangs and other cybercriminals, disinformation purveyors, terror groups and others inciting violence.

    ___

    AP Technology Writer Frank Bajak in Boston contributed to this report.

  • NYC transit agency pulls the brake on Twitter service alerts

    NEW YORK — Shortly after midnight Thursday, several New York City subway trains slowed to a crawl as emergency crews tended to a person discovered on the tracks in Manhattan.

    The delays were flagged for the Metropolitan Transportation Authority’s rail control center, where a customer service agent typed up a straightforward warning for early-morning riders to consider alternate routes.

    But while the message was quickly posted to the MTA’s website and app, the alert never made it to the subway system’s Twitter account, with its 1 million followers. The agency’s access to the platform’s back-end, officials soon learned, had been suspended by Twitter without warning.

    It was the second such breakdown in two weeks and the reaction inside the MTA was swift. By Thursday afternoon, senior executives agreed to cease publishing service alerts to the platform altogether.

    The decision put the country’s largest transportation network among a growing number of accounts, from National Public Radio to Elton John, who have reduced their Twitter presence or left the platform since its takeover by Elon Musk.

    It also caught riders, and some in the MTA, off guard, even as at least one other transit agency considered following suit.

    “The train schedule is always messed up. It’s convenient to have the answers all in one place,” lamented Brandon Gubitosa, a Queens resident, who said he checked for service alerts on the MTA’s Twitter feed before leaving for his commute each morning. “There should be some responsibility for Twitter to make sure this service doesn’t disappear.”

    For its part, Twitter has signaled that the days of private accounts disseminating troves of information at no cost may be ending. Last month, the company announced a new pricing system that would charge for access to its application programming interface, or API, which is used by accounts that post frequent alerts, such as transit and weather agencies.

    MTA officials estimated the cost could run as high as $50,000 a month. For a transit agency that faces a multibillion-dollar deficit, paying that much raised concerns.

    “The amount that is being posed is astronomical,” said Shanifah Rieara, the MTA’s acting chief customer officer. “We are all about bringing ridership back. We should not be paying to communicate service alerts to our customers.”

    Those that don’t agree to pay, Twitter warned, will begin to see their service “deprecate,” a process that some agencies say is already underway.

    A spokesperson for the Chicago Transit Authority confirmed it was considering ending alerts, citing what they described as Twitter’s “diminished” effectiveness for real-time transit information.

    On Friday, the Bay Area Rapid Transit System announced its alerts were temporarily unavailable due to technological issues, though a spokesperson said they hoped to have the issue fixed soon.

    Beyond the pricing, MTA officials offered other reasons for leaving Twitter, including the added vitriol and the move away from a chronological timeline.

    They also pointed to a desire to push customers toward existing in-house products that provide the same information about service disruptions, such as a pair of apps known as MYmta and TrainTime. They provide times for the subway and commuter rail system, respectively.

    A request for comment was sent to Twitter’s communications office. Twitter responded only with an automated reply.

    The MTA’s decision to scale back its use of Twitter comes as many institutional users of the platform wrestle with changes Musk has made in an effort to make the service profitable, including asking users to pay for checkmarks on their accounts that formerly served as a form of identity verification.

    Service alerts are valuable tools on New York City’s massive rail and bus system, where mechanical problems, track fires, repair work and other issues can cause subway trains to get delayed or diverted to lines where they don’t ordinarily run.

    Only a few years ago, riders were often left in the dark about those changes until they were already on subway platforms, where transit workers would bark announcements through scratchy speakers or hang paper signs about changes.

    Now, information about service, including the real-time position of subway cars, is available through a variety of electronic sources, both on people’s smartphones and in stations. Consumer research has suggested that those who seek service information on Twitter make up a relatively narrow slice of riders.

    Last month, more than 3 million people visited the MTA’s homepage, which also has the updates on service disruptions that once appeared on Twitter, and nearly 2 million others used the two apps, according to an authority spokesperson.

    In addition to service alerts, the MTA’s customer service agents use Twitter to provide real-time responses to questions and concerns — a back-and-forth that often serves to calm riders’ frayed nerves.

    Last month, the agency sent out 21,000 replies on Twitter — responses that offered a valuable public window into the MTA’s customer service policy, according to Rachael Fauss, a senior policy advisor at the watchdog group Reinvent Albany.

    “There was a personalization to it that was interesting,” Fauss said. “There’s an opportunity to see how the MTA responds to riders that you don’t get without Twitter.”

    For now, the agency said it would continue responding to customers on Twitter. But officials acknowledged there were no guarantees about whether that would remain the case long term.

    “The MTA gets blamed for a host of things, so we need a reliant and resilient way to communicate,” said Rieara. “In (Twitter’s) current stage, we can’t put our customers in a position to be guessing whether or not they have the most updated information.”

    [ad_2]

    Source link

  • NYC transit agency pulls the brake on Twitter service alerts

    NYC transit agency pulls the brake on Twitter service alerts

    [ad_1]

    NEW YORK — Shortly after midnight Thursday, several New York City subway trains slowed to a crawl as emergency crews tended to a person discovered on the tracks in lower Manhattan.

    The delays were flagged for the Metropolitan Transportation Authority’s rail control center, where a customer service agent typed up a straightforward warning for early-morning riders to consider alternate routes.

    But while the message was quickly posted to the MTA’s website and app, the alert never made it to the subway system’s Twitter account, with its 1 million followers. The agency’s access to the platform’s back-end, officials soon learned, had been suspended by Twitter without warning.

    It was the second such breakdown in two weeks and the reaction inside the MTA was swift. By Thursday afternoon, senior executives agreed to cease publishing service alerts to the platform altogether.

    The decision put the country’s largest transportation network among a growing number of accounts, from National Public Radio to Elton John, who have reduced their Twitter presence or left the platform since its takeover by Elon Musk.

    It also caught riders, and some in the MTA, off guard, even as other transit agencies considered following suit.

    “The train schedule is always messed up. It’s convenient to have the answers all in one place,” lamented Brandon Gubitosa, a Queens resident, who said he checked for service alerts on the MTA’s Twitter feed before leaving for his commute each morning. “There should be some responsibility for Twitter to make sure this service doesn’t disappear.”

    For its part, Twitter has signaled that the days of private accounts disseminating troves of information at no cost may be ending. Last month, the company announced a new pricing system that would charge for access to its application programming interface, or API, which is used by accounts that post frequent alerts, such as transit and weather agencies.

    MTA officials estimated the cost could run as high as $50,000 a month. For a transit agency that faces a multi-billion dollar deficit, paying that much raised concerns.

    “The amount that is being posed is astronomical,” said Shanifah Rieara, the MTA’s acting chief customer officer. “We are all about bringing ridership back. We should not be paying to communicate service alerts to our customers.”

    Those that don’t agree to pay, Twitter warned, will begin to see their service “deprecate,” a process that some agencies say is already underway.

    On Friday, the Bay Area Rapid Transit System announced its alerts were temporarily unavailable due to technological issues, though a spokesperson said they hoped to have the issue fixed soon. A spokesperson for Chicago Transit Authority confirmed they were considering ending alerts, citing what they described as Twitter’s “diminished” effectiveness for real-time transit information.

    Beyond the pricing, MTA officials offered other reasons for leaving Twitter, including the added vitriol and the move away from a chronological timeline. They also pointed to a desire to push customers toward in-house products, such as a pair of apps known as MYmta and TrainTime. They provide times for the subway and commuter rail system, respectively.

    A request for comment was sent to Twitter’s communications office. Twitter responded only with an automated reply.

    The MTA’s decision to scale back its use of Twitter comes as many institutional users of the platform wrestle with changes Musk has made in an effort to make the service profitable, including asking users to pay for identity verification checkmarks.

    Service alerts are valuable tools on New York City’s massive rail and bus system, where mechanical problems, track fires, repair work and other issues can cause subway trains to get delayed or diverted to lines where they don’t ordinarily run.

    Only a few years ago, riders were often left in the dark about those changes until they were already on subway platforms, where transit workers would bark announcements through scratchy speakers or hang paper signs about changes.

    Now, information about service, including the real-time position of subway cars, are available through a variety of electronic sources, both on people’s smartphones and in stations. Consumer research has suggested that subway riders seeking information on Twitter account for a relatively narrow slice of riders.

    Last month, more than 3 million people visited the MTA’s homepage and nearly 2 million others used the two apps, according to an authority spokesperson.

    In addition to service alerts, the MTA’s customer service agents use Twitter to provide real-time responses to questions and concerns — a back-and-forth that often serves to calm riders’ frayed nerves.

    Last month, the agency sent out 21,000 replies on Twitter — responses that offered a valuable public window into the MTA’s customer service policy, according to Rachael Fauss, a senior policy advisor at the watchdog group Reinvent Albany.

    “There was a personalization to it that was interesting,” Fauss said. “There’s an opportunity to see how the MTA responds to riders that you don’t get without Twitter.”

    For now, the agency said it would continue responding to customers on Twitter. But officials acknowledged there were no guarantees about whether that would remain the case long term.

    “The MTA gets blamed for a host of things, so we need a reliant and resilient way to communicate,” said Rieara. “In (Twitter’s) current stage, we can’t put our customers in a position to be guessing whether or not they have the most updated information.”

    [ad_2]

    Source link

  • NYC transit agency pulls the brake on Twitter service alerts

    NYC transit agency pulls the brake on Twitter service alerts

    [ad_1]

    NEW YORK — Shortly after midnight Thursday, several New York City subway trains slowed to a crawl as emergency crews tended to a person discovered on the tracks in lower Manhattan.

    The delays were flagged for the Metropolitan Transportation Authority’s rail control center, where a customer service agent typed up a straightforward warning for early-morning riders to consider alternate routes.

    But while the message was quickly posted to the MTA’s website and app, the alert never made it to the subway system’s Twitter account, with its 1 million followers. The agency’s access to the platform’s back-end, officials soon learned, had been suspended by Twitter without warning.

    It was the second such breakdown in two weeks and the reaction inside the MTA was swift. By Thursday afternoon, senior executives agreed to cease publishing service alerts to the platform altogether.

    The decision put the country’s largest transportation network among a growing number of accounts, from National Public Radio to Elton John, who have reduced their Twitter presence or left the platform since its takeover by Elon Musk.

    It also caught riders, and some in the MTA, off guard, even as other transit agencies considered following suit.

    “The train schedule is always messed up. It’s convenient to have the answers all in one place,” lamented Brandon Gubitosa, a Queens resident, who said he checked for service alerts on the MTA’s Twitter feed before leaving for his commute each morning. “There should be some responsibility for Twitter to make sure this service doesn’t disappear.”

    For its part, Twitter has signaled that the days of private accounts disseminating troves of information at no cost may be ending. Last month, the company announced a new pricing system that would charge for access to its application programming interface, or API, which is used by accounts that post frequent alerts, such as transit and weather agencies.

    MTA officials estimated the cost could run as high as $50,000 a month. For a transit agency that faces a multi-billion dollar deficit, paying that much raised concerns.

    “The amount that is being posed is astronomical,” said Shanifah Rieara, the MTA’s acting chief customer officer. “We are all about bringing ridership back. We should not be paying to communicate service alerts to our customers.”

    Those that don’t agree to pay, Twitter warned, will begin to see their service “deprecate,” a process that some agencies say is already underway.

    On Friday, the Bay Area Rapid Transit System announced its alerts were temporarily unavailable due to technological issues, though a spokesperson said they hoped to have the issue fixed soon. A spokesperson for Chicago Transit Authority confirmed they were considering ending alerts, citing what they described as Twitter’s “diminished” effectiveness for real-time transit information.

    Beyond the pricing, MTA officials offered other reasons for leaving Twitter, including the added vitriol and the move away from a chronological timeline. They also pointed to a desire to push customers toward in-house products, such as a pair of apps known as MYmta and TrainTime. They provide times for the subway and commuter rail system, respectively.

    A request for comment was sent to Twitter’s communications office. Twitter responded only with an automated reply.

    The MTA’s decision to scale back its use of Twitter comes as many institutional users of the platform wrestle with changes Musk has made in an effort to make the service profitable, including asking users to pay for identity verification checkmarks.

    Service alerts are valuable tools on New York City’s massive rail and bus system, where mechanical problems, track fires, repair work and other issues can cause subway trains to get delayed or diverted to lines where they don’t ordinarily run.

    Only a few years ago, riders were often left in the dark about those changes until they were already on subway platforms, where transit workers would bark announcements through scratchy speakers or hang paper signs about changes.

    Now, information about service, including the real-time position of subway cars, are available through a variety of electronic sources, both on people’s smartphones and in stations. Consumer research has suggested that subway riders seeking information on Twitter account for a relatively narrow slice of riders.

    Last month, more than 3 million people visited the MTA’s homepage and nearly 2 million others used the two apps, according to an authority spokesperson.

    In addition to service alerts, the MTA’s customer service agents use Twitter to provide real-time responses to questions and concerns — a back-and-forth that often serves to calm riders’ frayed nerves.

    Last month, the agency sent out 21,000 replies on Twitter — responses that offered a valuable public window into the MTA’s customer service policy, according to Rachael Fauss, a senior policy advisor at the watchdog group Reinvent Albany.

    “There was a personalization to it that was interesting,” Fauss said. “There’s an opportunity to see how the MTA responds to riders that you don’t get without Twitter.”

    For now, the agency said it would continue responding to customers on Twitter. But officials acknowledged there were no guarantees about whether that would remain the case long term.

    “The MTA gets blamed for a host of things, so we need a reliant and resilient way to communicate,” said Rieara. “In (Twitter’s) current stage, we can’t put our customers in a position to be guessing whether or not they have the most updated information.”


  • NYC transit agency ends Twitter alerts, says it’s unreliable



    New York City’s Metropolitan Transportation Authority, which for 14 years has provided real-time information on subway, train and bus service outages, delays and other important updates for its 1.3 million followers, will stop using Twitter for such alerts

    New York City’s Metropolitan Transportation Authority, which for 14 years has provided real-time information on service outages, delays and other important transit updates for its 1.3 million Twitter followers, will no longer do so.

    The NYC MTA said Thursday that “Twitter is no longer reliable for providing the consistent updates riders expect.” For this reason, the agency tweeted, it will no longer use the platform for service alerts and information.

    The MTA also listed other ways subway, train and bus riders can get reliable transit information, including through its mta.info site, text alerts and its Weekender newsletter for weekend advisories.

    Twitter has long been a way for people to keep track of train delays, news and weather alerts or the latest crime warnings from their local police department.

    But when the Elon Musk-owned platform started stripping blue verification check marks this month from accounts that don’t pay a monthly fee, it left public agencies and other organizations around the world scrambling to figure out a way to show they’re trustworthy and avoid impersonators.

    New York City’s government Twitter account, for instance, pinned a tweet to its profile telling users that it is an “authentic Twitter account representing the New York City Government. This is the only account for @NYCGov run by New York City government.”

    While Twitter is now offering gold checks for “verified organizations” and gray checks for government organizations and their affiliates, the former come at a cost too steep to justify for many agencies.

    The MTA’s affiliate Twitter accounts, such as the @NYCTSubway account that replied to passengers, will also stop providing real-time alerts. The agency encouraged riders to find other ways to get in touch, such as through WhatsApp.

    This story has been corrected to show that the Metropolitan Transportation Authority has been providing real-time information on Twitter for 14 years, not 13.


  • Deutsche Boerse Makes Offer for SimCorp



    By Sarah Sloat

    Deutsche Boerse SE said Thursday it would make a voluntary takeover offer for Danish software company SimCorp A/S for a total of 3.9 billion euros ($4.31 billion).

    The all-cash offer of DKK735 ($108.86) per share represents a 38.9% premium over the closing price of DKK529, and a 45.3% premium over the three-month volume-weighted…


  • Why the U.K. is blocking Microsoft’s deal for Activision and what comes next



    A U.K. regulator made the surprising decision Wednesday to block Microsoft Corp.’s deal for Activision Blizzard Inc. in a further sign of resistance to the power of Big Tech.

    The U.K.’s Competition and Markets Authority announced Wednesday that it would prohibit the $69 billion deal, saying the merger could hurt competition in the nascent market for cloud gaming. The decision comes after the agency said in late March that it no longer thought the deal would threaten console gaming, which is a vastly larger and more established…


  • Microsoft stock zooms toward highest prices in a year after strong earnings, forecast



    Microsoft Corp. shares headed toward their highest prices in more than a year in Tuesday’s extended session, after the software giant reported better-than-expected profit and revenue and guided for continued strong results in an uncertain economy.

    Microsoft MSFT reported fiscal third-quarter profit of $18.3 billion, or $2.45 a share, up from $2.22 a share a year ago. Revenue grew to $52.86 billion from $49.36 billion in the same quarter last year. Analysts on average were expecting earnings of $2.24 a share on sales of $51.02…


  • Big Tech crackdown looms as EU, UK ready new rules



    TikTok, Twitter, Facebook, Google, Amazon and other Big Tech companies are facing rising pressure in Europe as London and Brussels advanced new rules to curb the power of digital companies

    TikTok, Twitter, Facebook, Google, Amazon and other Big Tech companies are facing rising pressure from European authorities as London and Brussels advanced new rules Tuesday to curb the power of digital companies.

    The U.K. government unveiled draft legislation that would give regulators more power to protect consumers from online scams and fake reviews and boost digital competition.

    Meanwhile, the European Union was set to release a list of the 19 biggest online platforms and search engines that face extra scrutiny and obligations under the 27-nation bloc’s landmark digital rules taking effect later this year.

    The updates help solidify Europe’s reputation as the global leader in efforts to rein in the power of social media companies and other digital platforms.

    Britain’s Digital Markets, Competition and Consumers bill proposes giving watchdogs more teeth to curb the dominance of tech companies, backed by the threat of fines worth up to 10% of their annual revenue.

    Under the proposals, online platforms and search engines can be required to give rivals access to their data or be more transparent about how their app stores and marketplaces work.

    The rules would make it illegal to hire someone to write a fake review or allow the posting of online consumer reviews “without taking reasonable steps” to verify they’re genuine. They also would make it easier for consumers to get out of online subscriptions.

    The new rules, which still need to go through the legislative process and secure parliamentary approval, would apply only to companies with 25 billion pounds in global revenue or 1 billion pounds in U.K. revenue.

    Also Tuesday, the European Commission, the EU’s executive arm, is set to designate 19 of the biggest online platforms or search engines that will have to take extra steps to clean up illegal content and disinformation and keep users safe online.

    Violations of the bloc’s new Digital Services Act could result in fines worth up to 6% of a company’s annual global revenue — amounting to billions of dollars — or even a ban on operating in the EU.
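    To put those ceilings in perspective, the two revenue-based fine caps mentioned here (10% of annual revenue under the U.K. bill, 6% of annual global revenue under the Digital Services Act) work out as a simple percentage calculation; the revenue figure below is hypothetical:

```python
def max_fine(annual_revenue: float, pct_cap: float) -> float:
    """Upper bound of a revenue-based fine at a given percentage cap."""
    return annual_revenue * pct_cap / 100

# Hypothetical company with $100 billion in annual global revenue
revenue = 100e9

print(max_fine(revenue, 10))  # U.K. bill cap: 10000000000.0, i.e. $10 billion
print(max_fine(revenue, 6))   # DSA cap: 6000000000.0, i.e. $6 billion
```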

    Google, Twitter, TikTok, Apple, Facebook and Instagram have already disclosed that they have more than 45 million users in Europe, putting them over the bloc’s threshold.


  • Let’s Go Down The Latest Spotify ‘Fake Artist’ Rabbit Hole



    Recently, Spotify’s “fake artist” problem, first spotted as far back as 2017, has been a topic of conversation yet again, with a playlist of 49 virtually identical songs from different artists making the rounds on the internet. And no, this isn’t a snarky jab about how all pop music is built on the same general concepts; these songs appear to be similar versions of the same piece of poorly produced music, each differentiated by random changes in pitch.

    Between its gargantuan size and anemic royalty payouts, Spotify has rarely been without controversy. With the service acting as a veritable kingmaker that allegedly operates by the invisible hand of the music marketplace, attempts to mine it for money are nothing new. Sometimes large corporations are suspected of such behavior, including Spotify itself (which the company staunchly denies). Clever artists have also deployed tongue-in-cheek stunts to try and game the system, which is widely seen as being brutally unfair to indie musicians. Recently, songs from no-name artists have been found to bear striking similarities to one another. They’re clearly the same piece of music, starting the same way and using the same melodic motifs, though the album art, artist name, and base pitch of each version varies.

    On Twitter, media producer Adam Faze shared a strange discovery, collating 49 seemingly identical songs into a public Spotify playlist titled “these are all the same song.”

    One quick listen and, yeah, there are shades of difference, mostly in terms of pitch. But these are undeniably all the same song.

    As many pointed out in Faze’s replies, it all sounds like the product of low-effort generative music techniques or even AI productions—and, no, not the more respectable, exploratory kind that composers, electronic musicians, and visual artists have experimented with for years.

    Another odd quirk of the songs found in Faze’s cursed playlist is that each track features similarly styled, bizarre stock images for the album art.

    It would also seem that this phenomenon is not exclusive to Spotify. As musician Zoë Keating discovered, Apple Music also seems to have pitch-shifted renditions of classical music attributed to faux artists.

    Kotaku has reached out to Spotify and Apple for comment.

    While just about anyone can upload music to streaming services with something like a DistroKid account, Universal Music Group has recently called on Spotify to take a stance against AI-generated music that lifts the likeness of established artists to create new music. As with AI-generated visual art, however, these problems aren’t likely to fade away.


    Claire Jackson


  • SAP Cloud Sales Miss and Software Giant Cuts Outlook. Why the Stock Is Rising.





    SAP missed expectations for sales in its key cloud division and cut its outlook in first-quarter earnings released Friday. But the stock is still rising after the German software giant beat estimates for overall profit and revenue.

    SAP (ticker: SAP) reported earnings of €1.27 ($1.39) a share on revenue of €7.44 billion in the first three months of 2023. Analysts surveyed by FactSet had expected profit of €1.10 on sales of €7.30 billion.


  • Tenstreet Announces Acquisition of TruckMap, Launch of New Rewards & True Fuel Programs



    Tenstreet, the leading provider of driver recruiting software and workflow solutions for the transportation industry, announced at its 2023 User Conference in Las Vegas that it has acquired transportation routing company TruckMap.

    TruckMap is a mobile app for truck drivers that provides updates on parking availability, access to local truck services, and truck-optimized GPS routing. These functionalities will be incorporated into Tenstreet’s Driver Pulse App to make the platform even more useful for drivers on the road, joining an existing mobile job application, online training courses, fuel pricing information, and several other features that help over a million drivers each year manage their careers and drive more effectively. TruckMap is based in Chicago, Illinois.

    Tenstreet also announced enhancements to its Driver Pulse App at the User Conference, one of the more significant changes being a new Rewards platform that lets carriers grant points to drivers at significant milestones and to reinforce positive habits. Applause Rewards can be given to drivers for behaviors like receiving a customer compliment, helping another driver or driving safely for several months in a row. Points can also be automatically delivered for events like birthdays and work anniversaries. These points can then be redeemed for gift cards at driver-preferred vendors like Amazon, Bass Pro Shops, Walmart, Target, Best Buy, DoorDash, Lowe’s, Petco, and more.

    The Rewards functionality can also be used to run sweepstakes for drivers. Carriers determine tasks that drivers can complete to earn entries into weekly and monthly sweepstakes, gamifying behaviors they want to encourage and keeping drivers engaged and rewarded. 

    Additionally, Tenstreet introduced a set of fuel-efficiency offerings as part of its True Fuel service. The system allows a more equitable assessment of driver fuel-usage than a traditional miles-per-gallon approach. The new offerings allow carriers one-tap implementation of a comprehensive fuel-incentive program, powered by Driver Pulse and the newly introduced Rewards system. Carriers can also leverage telematics-based fuel-usage data for advanced fuel efficiency. All the tiers of True Fuel are built on the foundation of a decade of data gathering and machine learning and deliver thousands of dollars of fuel savings per truck annually.

    To learn more about the acquisition and new features, reach out to sales@tenstreet.com.

    About Tenstreet

    Tenstreet’s platform connects carriers and drivers, making it easier to fill trucks while staying compliant. We help thousands of motor carriers and private fleets to market, recruit, onboard, manage, and retain drivers. Since 2006, millions of drivers have used Tenstreet’s platform to quickly and securely apply for their next job.

    Source: Tenstreet


  • Deepfake porn could be a growing problem amid AI race



    NEW YORK — Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.

    But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.

    Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.

    Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images — essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.

    The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.

    “The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

    Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. To this day, Martin says she doesn’t know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

    Horrified, Martin contacted different websites for a number of years in an effort to get the images taken down. Some didn’t respond. Others took the images down, but she soon found them up again.

    “You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”

    The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.

    Eventually, Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for such content from online safety regulators.

    But governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.

    In the meantime, the companies behind some AI models say they’re already curbing access to explicit images.

    OpenAI says it removed explicit content from data used to train the image generating tool DALL-E, which limits the ability of users to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.
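    The keyword-based request filtering attributed to these services can be pictured with a toy sketch; the blocklist and function below are invented for illustration and are far simpler than what any real service runs (which, as noted, also involves techniques like image recognition):

```python
# Toy blocklist, not any vendor's actual list
BLOCKED_TERMS = {"nude", "explicit"}

def allow_prompt(prompt: str) -> bool:
    """Reject a generation request whose text contains a blocked term."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

print(allow_prompt("a watercolor landscape"))         # True
print(allow_prompt("explicit photo of a celebrity"))  # False
```

    A real filter would also have to handle misspellings, paraphrases and multi-word phrases, which is one reason the companies pair keyword checks with other detection methods.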

    Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. Those changes came following reports that some users were creating celebrity inspired nude pictures using the technology.

    Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it’s possible for users to manipulate the software and generate what they want since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”

    Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.

    TikTok said last month all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.

    The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.

    Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it’s intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.

    Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.

    Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is not prevalent, but one report released in 2019 by the AI firm DeepTrace Labs found it was almost entirely weaponized against women and the most targeted individuals were western actresses, followed by South Korean K-pop singers.

    The same app removed by Google and Apple had run ads on Meta’s platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement the company’s policy restricts both AI-generated and non-AI adult content and it has restricted the app’s page from advertising on its platforms.

    In February, Meta, along with adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down that allows teens to report explicit images and videos of themselves and have them removed from the internet. The reporting site works for regular images as well as AI-generated content, which has become a growing concern for child safety groups.

    “When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

    “We have not … been able to formulate a direct response yet to it,” Portnoy said.


  • Microsoft Looks to ChatGPT AI to Transform Its Digital Ad Business



  • What is Discord, the chatting app tied to classified leaks?



    PROVIDENCE, R.I. (AP) — The chatting app Discord, which is one of the most popular ways gamers communicate online, finds itself at the center of an investigation into the leak of classified documents about the war in Ukraine.

    The investigation is unfolding as Discord makes an ambitious push to recruit more users and expand the way they use the versatile app.

    Discord said it is cooperating with law enforcement in the investigation of the leak, which is believed to have started on the site. A Massachusetts Air National Guard member reportedly posted on Discord for years about guns, games, favorite memes and, according to some who chatted with him, closely guarded U.S. secrets.

    WHAT IS DISCORD?

    Discord started in 2015 as a nerdy online hangout for gamers and had some hiccups in its quest for mainstream success. Its growth accelerated during the COVID-19 pandemic as a forum for its mostly younger users to gossip or even help each other with homework.

    “Every month, more than 150 million people come to Discord to hang out with family, friends and communities,” its co-founder and CEO, Jason Citron, said last month at a press event. “It’s become a place where they have fun and get things done together.”

    Discord users skew young — about 38% of its web users and nearly half of its Android app users are between the ages of 18 and 24, according to digital intelligence platform Similarweb. They are roughly 75% male, the research group says.

    Recently, the app has also pitched itself as a gateway to artificial intelligence tools such as Midjourney, which conjures up new imagery based on commands it’s given in a Discord chat.

    Discord announced in January that it was buying another teen-focused social app called Gas, which enables people to share online polls and uplifting compliments.

    The purchase was part of a larger push to target communities beyond gaming, according to Insider Intelligence analyst Jeremy Goldman. Goldman said Discord has also benefited from the turmoil surrounding Elon Musk’s Twitter takeover as a “not-insignificant number” of gamers put Discord handles on their Twitter profiles to show they were decamping.

    HOW DOES IT WORK?

    Discord can be accessed through desktops, smartphones or gaming consoles such as Xbox and PlayStation. It allows users to create invite-only “servers.”

    The servers, which resemble the professional messaging platform Slack, allow users to create subchannels where they can communicate over text, voice or video chats.

    Some users might have “friend servers” of several dozen people they know in real life, while others might join larger servers devoted to an online community of people interested in a specific topic.

    The company hosts nearly 21,000 servers, the vast majority of which are dedicated to gaming. Others are focused on topics like generative AI, entertainment or music.

    WHAT ABOUT THE LEAKED DOCUMENTS?

    The Massachusetts Air National Guard member was identified as Jack Teixeira, 21, who was arrested Thursday in connection with the disclosure of highly classified military documents about the Ukraine war and other top national security issues. The breach has raised questions about America’s ability to safeguard its most sensitive secrets.

    Some of the leaks are believed to have started on Discord. A chat group called “Thug Shaker Central” drew roughly two dozen enthusiasts who talked about their favorite guns and shared memes and jokes, some of them racist. The group also included a running discussion on wars that included talk of Russia’s invasion of Ukraine.

    In that discussion, one user known as “the O.G.” would for months post material that he said was classified.

    HAS DISCORD BEEN INVOLVED WITH ANY OTHER INVESTIGATIONS?

    The white gunman who killed 10 Black shoppers and workers last year at a supermarket in Buffalo, New York, shared detailed plans for the attack with a small group of people on Discord about half an hour beforehand.

    The diary, kept on a private, invite-only server, included months of racist, antisemitic entries along with step-by-step descriptions of the shooter’s assault plans, a detailed account of a reconnaissance trip he made, and hand-drawn maps of the store. He livestreamed the attack on a different platform, Twitch.

    Discord said 15 users clicked on the invitation and would have had access to his entries. There was no evidence anyone saw them before the attack.

    Discord said it removed the diary and banned the shooter’s account as soon as it became aware of the entries. The company said it also took steps to prevent content related to the attack from spreading.

    Since 2020, Discord has been part of the Global Internet Forum to Counter Terrorism, a group co-founded by tech companies such as Microsoft, Facebook and YouTube that works to tamp down the spread of mass shooting videos livestreamed by their perpetrators.

    ___

    Hadero reported from New York.
