ReportWire

Tag: iab-technology & computing

  • A new CEO won’t fix Twitter’s biggest problem | CNN Business



    New York (CNN) —

    During his six months as Twitter’s CEO and owner, Elon Musk decimated its ad business, alienated some news publications and VIP users, and plunged the platform into a constant state of chaos.

    Now, a new chief executive will be tasked with trying to turn things around.

    Musk announced on Friday that he would in the coming weeks hand the CEO role over to Linda Yaccarino, a longtime media executive and former chairman of global advertising and partnerships at NBCUniversal. Yaccarino has said little publicly so far, beyond noting her excitement to “transform this business together.”

    Twitter is in desperate need of stability from a leader. And Yaccarino brings the ad industry chops that Twitter sorely needs to lure back top advertisers and boost its business after a turbulent period. But she may struggle to address Twitter’s biggest problem: Elon Musk.

    Although Musk is handing off the CEO title — and, perhaps, trying to shed some of the accountability that comes with it — the billionaire remains firmly in charge of the company as its owner and executive chair. Musk will still be in the C-Suite as Twitter’s chief technology officer. And he continues to be Twitter’s most-followed user, meaning his controversial statements to his nearly 140 million followers could still create headaches for the company.

    In tech, the CEO is often the public face of the brand. But Musk will almost certainly continue to fill that role, with or without the title, likely to Twitter’s detriment.

    Just this week, Musk drew backlash for baselessly attacking billionaire George Soros, a frequent target for antisemitic conspiracy theories, saying the financier “hates humanity.” Musk’s Twitter also faced criticism in recent days for removing some tweets and accounts at the behest of Turkey’s government amid the country’s election; the company later said it would object to the removal requests in court.

    On Tuesday, Musk said he “didn’t care” if his controversial tweets drew the ire of Twitter advertisers or Tesla shareholders. “I’ll say what I want to say, and if the consequence of that is losing money, so be it,” Musk said in an interview with CNBC.

    “The question is: can she help balance [Musk]?” said Tim Hubbard, management professor at University of Notre Dame’s Mendoza College of Business. He added that top ad buyers are more likely to take calls from Yaccarino than from Musk, who has previously said he hates advertising.

    But “the big problem with Twitter right now is, they’re on a pathway that turns advertisers off, turns users off,” Hubbard said. “Unless there are fundamental changes at Twitter, I don’t think [the leadership change] is going to have the immediate effect that Elon is hoping it will have.”

    Twitter did not respond to a request for comment on this story.

    The Musk issue was on full display at NBCU’s ad upfront this week, which was held shortly after Yaccarino resigned from the company following rumors of her appointment as Twitter’s CEO. On stage at the event, which aimed to promote NBCU’s platforms to advertisers, a talking bear sang to audience members: “Twitter may seem like the place to begin, but Twitter just let all the crazies back in.”

    Even if Musk pulls back on his tweeting, a feat he seems constitutionally incapable of achieving, it will be no easy task for Yaccarino to revive Twitter’s advertising business — let alone expand it.

    Many major advertisers left the platform following Musk’s takeover over concerns about an uptick in hate speech, frustrations over layoffs that gutted much of the company’s ad and safety teams, and general uncertainty about the platform’s future. Just 43% of Twitter’s top 1,000 advertisers as of September, the month before Musk’s takeover, were still advertising on the platform as of last month, according to data from market intelligence firm Sensor Tower.

    But for many, leaving Twitter may not have been a particularly difficult call.

    Even in the best of times, Twitter was an also-ran in the digital ad space compared to tech giants like Meta and Google, with a smaller user base and less sophisticated ad targeting technology. And Musk’s takeover came as many advertisers pulled back their digital ad spending across the board during a precarious moment for the economy. That could only add to the difficulty Yaccarino will face in shoring up Twitter’s business.

    Musk, for his part, has been attempting to supplement, and potentially largely replace, Twitter’s ad business with subscriptions, but it appears that only a tiny fraction of Twitter users have bought in. The selection of Yaccarino suggests a recognition on his part that the company he bet $44 billion on will continue to be reliant on ad sales for the foreseeable future.

    It’s unclear how much freedom Yaccarino will have to hire additional staff to support her likely remit to revive advertising on Twitter after Musk laid off around 80% of the company’s staff last year. And even if she is able to hire, top talent may be wary of joining Twitter after Musk upended the company’s culture and reportedly rolled back benefits like work-from-home and extended parental leave.

    “Personnel is going to be a huge challenge for her … if tech workers are looking for a stable working environment, they will probably stay away from Twitter,” Hubbard said.

    But Musk’s ongoing influence remains the biggest potential hurdle.

    Musk has said he will oversee product, technology and software and systems operations, while Yaccarino will focus on business operations. The announcement has left open the question of whether Musk will remain in charge of controversial policy decisions, many of which — including allowing users to buy blue verification checks and restoring the accounts of rule violators, including white supremacists — have threatened Twitter’s popularity with users and advertisers.

    “Cleaning up Twitter requires reversing Musk’s dangerous policy decisions, reinvesting in content moderation and enforcement, and restructuring the platform’s governance,” Jessica Gonzalez, co-CEO of media watchdog Free Press, who helped found the #StopToxicTwitter campaign encouraging advertisers to avoid the platform, said in a statement.

    “Musk is setting future CEO Linda Yaccarino up to fail — as long as he continues to make the platform toxic, it will be impossible to lure back advertisers and users,” she said.


  • Dutch watchdog looking into alleged Tesla data breach | CNN Business




    (Reuters) —

    The data protection watchdog for the Netherlands said on Friday it was aware of possible Tesla data protection breaches, but it was too early for further comment.

    Germany’s Handelsblatt reported on Thursday that Elon Musk’s Tesla had allegedly failed to adequately protect data from customers, employees and business partners, citing 100 gigabytes of confidential data leaked by a whistleblower.

    “We are aware of the Handelsblatt story and we are looking into it,” said a spokesperson for the AP data watchdog in the Netherlands, where Tesla’s European headquarters is located.

    The spokesperson declined to comment on whether the agency might launch, or had already launched, an investigation, citing policy. The Dutch agency was informed by its counterpart in the German state of Brandenburg.

    Handelsblatt said Tesla notified the Dutch authorities about the breach, but the AP spokesperson said they were not aware if the company had made any representations to the agency.

    Tesla was not immediately available for comment on Friday on the Handelsblatt report, which said customer data could be found “in abundance” in a data set labelled “Tesla Files”.

    The data protection office in Brandenburg, which is home to Tesla’s European gigafactory, described the data leak as “massive”.

    “I can’t remember such a scale,” Brandenburg data protection officer Dagmar Hartge said, adding that the case had been handed to the Dutch authorities who would be responsible if the allegations led to an enforcement action.

    The Dutch authorities have several weeks to decide whether to deal with the case as part of a European procedure, she added.

    The files include tables containing more than 100,000 names of former and current employees, including the social security number of Tesla CEO Musk, along with private email addresses, phone numbers, salaries of employees, bank details of customers and secret details from production, Handelsblatt reported.

    The breach would violate the European Union’s General Data Protection Regulation (GDPR), it said.

    If such a violation were proved, Tesla could be fined up to 4% of its annual sales, which could amount to 3.26 billion euros.
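    For a rough sense of scale, the 4% cap and the 3.26 billion euro figure above imply annual sales of roughly 81.5 billion euros. A minimal sketch of that back-of-the-envelope arithmetic (illustrative only; the revenue figure is implied by the article’s numbers, not reported separately):

      // Illustrative arithmetic only: the GDPR caps fines at 4% of annual sales,
      // so the 3.26 billion euro maximum cited above implies annual sales of
      // roughly 81.5 billion euros.
      const maxFineRate = 0.04;          // 4% cap
      const reportedMaxFine = 3.26e9;    // euros, from the article
      const impliedAnnualSales = reportedMaxFine / maxFineRate;
      console.log(`~${(impliedAnnualSales / 1e9).toFixed(1)} billion euros`); // ~81.5 billion euros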

    German union IG Metall said the revelations were “disturbing” and called on Tesla to inform employees about all data protection violations and promote a culture in which staff could raise problems and grievances openly and without fear.

    “These revelations … fit with the picture that we have gained in just under two years,” said Dirk Schulze, IG Metall incoming district manager for Berlin, Brandenburg and Saxony.

    Handelsblatt quoted a lawyer for Tesla as saying a “disgruntled former employee” had abused their access as a service technician, adding that the company would take legal action against the individual it suspected of the leak.

    Citing the leaked files, the newspaper reported thousands of customer complaints about the carmaker’s driver assistance systems, including around 4,000 complaints of sudden acceleration or phantom braking.

    Last month, a Reuters report showed that groups of Tesla employees privately shared, via an internal messaging system, sometimes highly invasive videos and images recorded by customers’ car cameras between 2019 and 2022.

    This week, Facebook parent Meta was hit with a record 1.2 billion euro ($1.3 billion) fine by its lead European Union privacy regulator over its handling of user information and given five months to stop transferring user data to the U.S.


  • Twitter loses its top content moderation official at a key moment | CNN Business




    (CNN) —

    Twitter has lost its top content moderation official just weeks before the company is set to undergo a regulatory stress test by European Union officials focused on its handling of user content, in the latest sign of turbulence at the company under owner Elon Musk.

    On Thursday, Twitter’s head of trust and safety, Ella Irwin, told Reuters she had left the company. Irwin has not addressed the reasons for her departure, but the move coincided with the company’s content moderation dispute with the Daily Wire, a conservative outlet.

    The dispute focused on the forthcoming release of a self-described documentary, “What Is a Woman?” that Twitter warned would be labeled as “hateful content” due to two instances of misgendering, according to Daily Wire CEO Jeremy Boreing. Musk intervened later Thursday, calling the content moderation decision “a mistake by many people at Twitter” and saying the video would be “definitely allowed.”

    Twitter did not immediately respond to a request for comment on Irwin’s departure.

    But the sudden and unexpected vacancy at Twitter could leave the company without a key content moderation official at a sensitive moment. Later this month at Twitter’s San Francisco offices, EU officials are set to review whether the platform is likely to be compliant with a sweeping content moderation law that could eventually trigger millions of dollars in fines for Twitter if it’s found to be noncompliant.

    That law, known as the Digital Services Act, will require so-called “very large online platforms” including Twitter to abide by tough content moderation standards by as early as August. It’s far from clear whether the company can meet those requirements by the deadline, and recent developments at Twitter seem to have further alarmed EU regulators in that respect.

    For months, as Musk has increasingly welcomed onto the platform incendiary speech that Twitter had previously restricted, EU officials have been reminding Twitter of its content moderation obligations under the DSA. The warnings have also come amid mass layoffs at the company that have eliminated entire teams, including much of its content moderation staff.

    Last month, Twitter pulled out of the European Union’s code of conduct on disinformation, a series of voluntary commitments to combat mis- and disinformation that the EU has said would be considered as part of any evaluation of a platform’s compliance with the overall Digital Services Act (DSA).

    Although Twitter said it was “committed to fully complying with the Digital Services Act” and would meet its DSA obligations with respect to misinformation “in a manner that reflects Twitter’s unique service,” the company told EU officials “we feel we have no alternative” but to withdraw from the code.

    The announcement prompted swift backlash from Thierry Breton, a top EU commissioner and digital regulator, who appeared to regard Twitter’s decision as an attempt to evade responsibility.

    “Obligations remain,” Breton said. “You can run but you can’t hide.”

    Irwin’s departure could undercut the EU’s confidence further. Without a trust and safety head who would otherwise be expected to attend the EU stress test, Twitter’s ability to effectively respond to the evaluation may be constrained. A spokesperson for the European Commission didn’t immediately respond to a request for comment.

    On Friday, The Wall Street Journal reported that Twitter’s head of brand safety and ad quality also departed the company this week.

    All of this could be problematic for Twitter and Musk in the long run – and could also create an added headache for Linda Yaccarino just as she takes over as the company’s new CEO.

    Companies that fail to abide by the DSA risk fines of up to 6% of their global annual revenue. For Twitter, which is already struggling to regain its financial footing amid significant debt and an advertiser backlash, that’s a cost it can ill afford.


  • Silicon Valley escalates the battle over returning to the office | CNN Business




    (CNN) —

    Three years after Silicon Valley companies led the charge for embracing remote work in the early days of the pandemic, the tech industry is now escalating the fight to bring employees back into the office, igniting tensions with staff in the process.

    Google, which has long been a bellwether for workplace policies in the tech industry and beyond, frustrated some employees this week by announcing plans to begin more strictly enforcing its policy requiring workers to be in the office at least three days a week. The updated policy includes tracking office badge attendance and possibly factoring it into performance reviews, according to CNBC, citing internal memos.

    “Overnight, workers’ professionalism has been disregarded in favor of ambiguous attendance tracking practices tied to our performance evaluations,” Chris Schmidt, a software engineer at Google and member of the grassroots Alphabet Workers Union, told CNN in a statement. “The practical application of this new policy will be needless confusion amongst workers and a disregard for our various life circumstances.”

    In a statement, Ryan Lamont, a Google spokesperson, told CNN that its policy of working in the office three days a week is “going well, and we want to see Googlers connecting and collaborating in-person, so we’re limiting remote work to exception only.”

    Lamont said that company leaders can see reports showing how their teams are adopting the hybrid work model, including “aggregated data” on badge swipes. He added that now that the company is more than a year into its hybrid model, “we’re formally integrating this approach into all of our workplace policies.”

    Google isn’t alone in facing pushback from employees. Other tech companies are also grappling with how best to compel workers to come into the office after they’ve grown accustomed to greater flexibility. The tug-of-war is compounded by the fact that tech companies have laid off tens of thousands of employees over the past year, leveling a major blow to employee morale.

    At Amazon, tensions boiled over last week as hundreds of office workers staged a walkout to call attention to their grievances, including the three-day return-to-office mandate that was implemented in May.

    A current Amazon worker who spoke at the walkout said that she started an internal Slack channel called “remote advocacy” because she wanted a space where workers could discuss how the new return-to-office policy would impact their lives.

    “Before I realized what was happening, that channel had 33,000 people in it,” the worker, who identified only as Pamela, said to the crowd at the event. Pamela called the Slack channel advocating for remote work “the largest concrete expression of employee dissatisfaction in our entire company history.”

    But the employee criticism isn’t stopping tech companies, which have spent billions on sprawling campuses over the years and often preach the value of serendipitous workplace interactions, from moving forward with their return-to-office policies.

    In response to the walkout, Amazon previously told CNN it may “take time” for some workers to adjust to being in the office more days. But the company also said it’s “happy with how the first month of having more people back in the office has been” and touted the extra “energy, collaboration, and connections happening” in the office.

    Facebook-parent Meta similarly doubled down last week on its push to get workers in the office, warning that employees currently assigned to an office must return to in-person work three days a week starting this September. (A Meta spokesperson told CNN the updated policy was not set in stone, and employees designated as remote workers will be allowed to keep their remote status).

    At least one tech company is taking a gentler approach.

    Salesforce is trying to lure staff into offices by offering to donate $10 to a local charity for each day an employee comes in from June 12 to June 23, according to an internal Slack message reported on by Fortune.

    A Salesforce spokesperson told CNN: “Giving back is deeply embedded in everything we do, and we’re proud to introduce Connect for Good to encourage employees to help us raise $1 Million+ for local nonprofits.”

    But it might take more than temporary charitable contributions to convince some workers it’s worthwhile to return. Schmidt, the software engineer at Google, said that even if you go into the office, there’s no guarantee you’ll have people on your team to work with or even a desk to sit at.

    “Many teams are distributed, and for some of us there may not be anyone to collaborate with in our physical office locations,” Schmidt said. “Currently, New York City workers do not even have enough desks and conference rooms for workers to use comfortably.”

    “A one size fits all policy does not address these circumstances,” he added. “We deserve a voice in shaping the policies that impact our lives to establish clear, transparent and fair working conditions for all of us.”


  • Forget about the AI apocalypse. The real dangers are already here | CNN Business




    (CNN) —

    Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

    Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft’s CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to “civilization destruction.” But he remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings from Microsoft and Google.

    Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

    “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely “genuinely worried about what they have unleashed,” he said, but others may be trying to focus attention on “abstract possibilities to detract from the more immediate possibilities.”

    Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: “We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly.”

    For Marcus, a self-described critic of AI hype, “the biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation.”

    Generative AI tools like OpenAI’s ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.

    In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.”

    Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.

    [Photo caption: Gary Marcus, professor emeritus at New York University, right, listens as OpenAI CEO and co-founder Sam Altman speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, on May 16, 2023.]

    Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.

    Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

    “If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN.

    Regulators may be the real intended audience for the tech industry’s doomsday messaging.

    As Bender puts it, execs are essentially saying: “‘This stuff is very, very dangerous, and we’re the only ones who understand how to rein it in.’”

    Judging from Altman’s appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers’ concerns about AI — a technology that many in Congress are still trying to understand — and offering suggestions for how to address it.

    This approach to regulation would be “hugely problematic,” Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

    “If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we’re really going to miss out,” Bender said.

    Bender said she tries, at every opportunity, to tell people “these things seem much smarter than they are.” As she put it, this is because “we are as smart as we are” and the way that we make sense of language, including responses from AI, “is actually by imagining a mind behind it.”

    Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”


  • Schumer outlines plan for how Senate will regulate AI | CNN Business




    (CNN) —

    Senate Majority Leader Chuck Schumer announced a broad, open-ended plan for regulating artificial intelligence on Wednesday, describing AI as an unprecedented challenge for Congress that effectively has policymakers “starting from scratch.”

    The plan, Schumer said in a speech in Washington, will begin with at least nine panels to identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.” The panels will be composed of experts from industry, academia and civil society, with the first sessions taking place in September, Schumer said.

    The Senate will then turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions, Schumer added, arguing that the resulting US solution could leapfrog existing regulatory proposals from around the world.

    “If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI, because I don’t think any of the existing proposals have captured that imagination,” Schumer said, reflecting on other recent proposals such as the European Union’s draft AI Act, which last week was approved by the European Parliament.

    The speech represents Schumer’s most definitive remarks to date on a problem that has dogged Congress for months amid the wide embrace of tools such as ChatGPT: How to catch up, or get ahead, on policymaking for a technology that is already in the hands of millions of people and evolving rapidly.

    In the wake of ChatGPT’s viral success, Silicon Valley has raced to develop and deploy a new crop of generative AI tools that can produce images and writing almost instantly, with the potential to change how people work, shop and interact with each other. But these same tools have also raised concerns for their potential to make factual errors, spread misinformation and perpetuate biases, among other issues.

    In contrast to the fast pace of AI advancements, Schumer has stressed the importance of a deliberate approach, focusing on getting lawmakers acquainted with the basic facts of the technology and the issues it raises before seeking to legislate. He and three other colleagues began last week by convening the first in a series of closed-door briefings on AI for senators that is expected to run through the summer.

    In his remarks Wednesday, Schumer appeared to acknowledge criticism of his pace.

    “I know many of you have spent months calling on us to act,” he said. “I hear you. I hear you loud and clear.”

    But he described AI as a novel issue for which Congress lacks a guide.

    “It’s not like labor, or healthcare, or defense, where Congress has had a long history we can work off of,” he said. “Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch.”

    Schumer described his plan as laying “a foundation for AI policy” that will do “years of work in a matter of months.”

    To guide that process, Schumer expanded on a set of principles he first announced in April. Formally unveiling the framework on Wednesday, Schumer said any legislation on AI should be geared toward facilitating innovation before addressing risks to national security or democratic governance.

    “Innovation first,” Schumer said, “but with security, accountability, [democratic] foundations and explainability.”

    The last two pillars of his framework, Schumer said, may be among the most important, as unrestricted artificial intelligence could undermine electoral processes or make it impossible to critically evaluate an AI’s claims.

    Schumer’s remarks stopped short of calling for any specific proposals. At one point, he acknowledged that a consensus may even emerge that recommends against major government intervention on the technology.

    But he was clear on one point: “We do — we do — need to require companies to develop a system where in simple and understandable terms users understand why the system produced a particular answer, and where that answer came from.”

    The Senate may still be a long way off from unveiling any comprehensive proposal, however. Schumer predicted that the process is likely to take longer than weeks but shorter than years.

    “Months would be the proper timeline,” he said.


  • Meta takes aim at Twitter with new Threads app | CNN Business



    London (CNN) —

    The rivalry between Mark Zuckerberg and Elon Musk has just kicked up a notch.

    Zuckerberg’s Meta, which owns Facebook and Instagram, has teased a new app that is set to take on Twitter by offering a rival space for real-time conversations online.

    The app is called Threads and it is expected to go live Thursday, according to a listing in the App Store. The app appears to have many similarities to Twitter — the App Store description emphasizes conversations, as well as the potential to build a following and connect with like-minded people.

    “Threads is where communities come together to discuss everything from the topics you care about today to what’ll be trending tomorrow,” it reads.

    “Whatever it is you’re interested in, you can follow and connect directly with your favorite creators and others who love the same things — or build a loyal following of your own to share your ideas, opinions and creativity with the world.”

    The move by Meta comes amid a fresh bout of turmoil at Twitter, which experienced an outage over the weekend, followed by an announcement that the site had imposed temporary limits on how many tweets its users are able to read while using the app.

    Musk, the platform’s billionaire owner, said these restrictions had been applied “to address extreme levels of data scraping and system manipulation.”

    Commenting on the launch of Threads Monday, Musk tweeted: “Thank goodness they’re so sanely run,” parroting reported comments by Meta executives that appeared to take a jab at Musk’s erratic behavior.

    Since taking Twitter private in October, Musk has turned the social media platform on its head, alienating advertisers and some of its highest-profile users.

    He is now looking for ways to return the platform to growth. Twitter announced Monday that users would soon need to pay for TweetDeck, a tool that allows people to organize and easily monitor the accounts they follow.

    Twitter is also attempting to encroach on Meta’s domain.

    In May, Twitter added encrypted messaging and said calls would follow, developments that could allow the platform to compete with Facebook Messenger and WhatsApp, also owned by Meta.

    Musk and Zuckerberg’s rivalry could soon extend beyond business and into the ring. Last month, the two men discussed the possibility of a cage fight, with the Las Vegas arena that hosts the Ultimate Fighting Championship seemingly the favorite location for the match.


  • Tax prep companies shared private taxpayer data with Google and Meta for years, congressional probe finds | CNN Business




    (CNN) —

    Some of America’s largest tax-prep companies have spent years sharing Americans’ sensitive financial data with tech titans including Meta and Google in a potential violation of federal law — data that in some cases was misused for targeted advertising, according to a seven-month congressional investigation.

    The report highlights what legal experts described to CNN as a “five-alarm fire” for taxpayer privacy that could lead to government and private lawsuits, criminal penalties or perhaps even a “mortal blow” for some industry giants involved in the probe, including TaxSlayer, H&R Block and TaxAct.

    Using visitor tracking technology embedded on their websites, the three tax-prep companies allegedly sent tens of millions of Americans’ personal information to the tech industry without consent or appropriate disclosures, according to the congressional report reviewed by CNN.

    Beyond ordinary personal data such as people’s names, phone numbers and email addresses, the list of information shared also included taxpayer data — details about people’s filing status, adjusted gross income, the size of their tax refunds and even information about the buttons and text fields they clicked on while filling out their tax forms, which could reveal what tax breaks they may have claimed or which government programs they use, according to the report.

    The report, which drew on congressional interviews and written testimony from Meta, Google and the tax-prep companies, also found that every taxpayer who used TaxAct’s IRS Free File service while the tracking was enabled would have had their information shared with the tech companies. Some of the tax-prep companies still do not know whether the data they shared continues to be held by the tech platforms, the report said.

    “On a scale from one to 10, this is a 15,” said David Vladeck, a law professor at Georgetown University and a former consumer protection chief at the Federal Trade Commission, the country’s top privacy watchdog. “This is as great as any privacy breach that I’ve seen other than exploiting kids. This is a five-alarm fire, if what we know about this so far is true.”

    It is also an example, Vladeck said, of why the United States needs federal legislation guaranteeing every American a basic right to data privacy — an issue that has languished in Congress for years despite electronic data becoming an ever-larger part of the global economy.

    The congressional findings represent the latest claims of wrongdoing to hit the embattled tax-prep industry after a report last year by the investigative journalism outlet The Markup highlighted the tracking practice.

    Wednesday’s bombshell report adds to those earlier revelations by identifying a previously unreported category of data that was allegedly being collected and shared: the webpage titles in online tax software that can reveal what tax forms users have accessed, said an aide to Democratic Sen. Elizabeth Warren, who helped lead the congressional probe. For example, taxpayers who entered information about their college savings contributions or rental income may have done so on webpages bearing titles reflecting that information, which would then have been shared with the tech companies, the aide said.

    During the probe, Meta told investigators it used the taxpayer data it received to target third-party ads to users of its platform and to train its artificial intelligence algorithms, the report said. The Warren aide told CNN it was unclear whether Meta knew it was inappropriately using taxpayer data at the time. A Meta spokesperson said the company instructs its partners not to use its tools to share sensitive information and that Meta’s systems are “designed to filter out potentially sensitive data it is able to detect.”

    The technology behind the data collection, known as a tracking pixel, is commonly used across the entire internet. A small snippet of code that website owners can insert onto their sites, tracking pixels gather information that can help companies, including but not limited to Meta and Google, understand the behavior or interests of website visitors.
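    In practice, a pixel boils down to a small script or invisible image request that sends page context back to the analytics provider. The sketch below is a generic, hypothetical illustration of that mechanism (the endpoint and field names are invented, not Meta’s or Google’s actual APIs), showing how something as simple as a page title can travel with the request.

      // Hypothetical sketch of a generic tracking pixel (browser-side TypeScript).
      // The endpoint and parameter names are illustrative, not any vendor's API.
      function firePixel(endpoint: string): void {
        const params = new URLSearchParams({
          page_title: document.title,          // e.g. a page title naming a tax form
          page_url: window.location.href,
          referrer: document.referrer,
        });
        const img = new Image(1, 1);           // invisible 1x1 image
        img.src = `${endpoint}?${params.toString()}`;  // the data leaves with this request
      }

      // A site owner would typically call this on every page load.
      firePixel("https://analytics.example.com/collect");

    The concern described in the report arises when the pages carrying such a snippet are tax-filing screens whose titles, buttons and form fields encode sensitive financial details.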

    Because of the tracking technology used by TaxAct, TaxSlayer and H&R Block, “every single taxpayer who used their websites to file their taxes could have had at least some of their data shared,” the report said.

    The tax-prep companies at the center of the investigation told lawmakers the collected data had been scrambled to help protect privacy, according to the report. But the report also said some of the tax-prep firms themselves were not fully aware of how much information was being exposed to the tech platforms, and the report cited past FTC research concluding that even “anonymized” data can be easily reverse-engineered to identify a person.

    The pixels’ use in a taxpayer context resulted in the “reckless” sharing of legally protected data that could put taxpayers at risk, according to the report by Warren and her Democratic colleagues Sens. Ron Wyden, Richard Blumenthal, Tammy Duckworth and Sheldon Whitehouse; Sen. Bernie Sanders, an independent who caucuses with Democrats; and Democratic Rep. Katie Porter.

    The FTC, the Internal Revenue Service, the Justice Department and the Treasury Inspector General for Tax Administration “should fully investigate this matter and prosecute any company or individuals who violated the law,” the lawmakers wrote in a letter dated Tuesday to the agencies and obtained by CNN. The FTC and DOJ declined to comment; the IRS and TIGTA didn’t immediately respond to a request for comment.

    In a statement, H&R Block said it takes client privacy “very seriously, and we have taken steps to prevent the sharing of information via pixels.” Wednesday’s report said H&R Block had testified to using the tracking technology for “at least a couple of years.”

    TaxAct and TaxSlayer didn’t immediately respond to a request for comment. The report said TaxAct had been using Meta’s tools since 2018 and Google’s since about 2014, while TaxSlayer began using Meta’s tools in 2018 and Google’s in 2011. The investigation found that all three tax-prep companies had discontinued their use of Meta’s pixel after The Markup’s report last November.

    Intuit, the maker of TurboTax, received an initial inquiry letter from the lawmakers in December but was not a focus of Wednesday’s report because the company did not use tracking pixels to the same extent, the investigation found.

    Tax preparation firms have faced mounting scrutiny in recent years amid reports that many have turned to data harvesting as a business model and that the largest among them have spent millions lobbying against legislation that could make it easier for Americans to file their tax returns. An IRS report this year found that 72% of Americans would be interested in using a free, electronic tax filing service if it were provided by the agency as an alternative to private online filing services. The IRS plans to launch a pilot version of that service to a limited number of taxpayers in the 2024 tax filing season.

    Google told CNN it prohibits business customers from uploading to its platform sensitive data that could be traced back to a person.

    “We have strict policies and technical features that prohibit Google Analytics customers from collecting data that could be used to identify an individual,” a Google spokesperson said. “Site owners — not Google — are in control of what information they collect and must inform their users of how it will be used. Additionally, Google has strict policies against advertising to people based on sensitive information.”

    Wednesday’s report focuses more heavily on Meta’s use of taxpayer data, the Warren aide told CNN, because Google did not appear to have used the information for its own commercial purposes as overtly as Meta and the investigation was unable to fully determine whether Google may have used the data for other applications.

    The allegations could nevertheless create extensive legal risk for both the tech companies as well as the tax-preparation firms, according to tax and privacy legal experts.

    The tax-prep companies could face billions in fines under US tax law if the federal government decides to sue, said Steven Rosenthal, a senior fellow at the Urban-Brookings Tax Policy Center. In addition, the US government could seek criminal penalties.

    “The scope of ‘taxpayer information’ is broad by design,” Rosenthal said, adding that tax-prep companies can be sued for “knowingly” or “recklessly” leaking that information. “The companies shouldn’t be sharing it in a way that some third party could obtain it.”

    Theoretically, he said, the tax code also affords individual taxpayers the right to file private lawsuits against the tax-prep companies. But most if not all of those firms require customers to submit to mandatory arbitration that could realistically make bringing a private claim more challenging, said the Warren aide.

    Apart from the tax code, both the tech giants as well as the tax-prep firms could also face civil liability from the FTC — which can police data breaches and hold companies accountable for their commitments to user privacy — and potentially from state governments that have their own privacy laws on the books, said Vladeck.

    Depending on the strength of the allegations, the tax-prep companies could quickly be forced into a binding settlement, said a former FTC official who requested anonymity in order to speak more freely.

    “If the facts are really strong, these companies would probably rather settle than go to court. This is very embarrassing,” the former official said. “It could be a mortal blow to the tax prep companies.”


  • Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics




    (CNN) —

    Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.

    The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies – which also include Amazon, Meta, OpenAI, Anthropic and Inflection – aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.

    In a speech Friday, Biden called the companies’ commitments “real and concrete,” adding they will help fulfill their “fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

    “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation,” Biden said.

    White House officials acknowledge that some of the companies have already enacted some of the commitments but argue they will as a whole raise “the standards for safety, security and trust of AI” and will serve as a “bridge to regulation.”

    “It’s a first step, it’s a bridge to where we need to go,” White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. “It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we’ve seen before.”

    While most of the companies already conduct internal “red-teaming” exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology – such as a cyberattack or its potential to be used by malicious actors – and allows companies to proactively identify shortcomings and prevent negative outcomes.

    Reed said the external red-teaming “will help pave the way for government oversight and regulation,” potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.

    The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.

    The companies also committed to investing in cybersecurity and “insider threat safeguards,” in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems “to help address society’s greatest challenges,” according to the White House.

    Asked by CNN’s Jake Tapper Friday about worries he has when it comes to AI, Microsoft Vice Chair and President Brad Smith pointed to “what people, bad actors, individuals or countries will do” with the technology.

    “That they’ll use it to undermine our elections, that they will use it to seek to break in to our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.

    But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them. And the interesting thing about AI, in my opinion, is that when we do that, and we are determined to do that, we can use AI to defend against these problems far more effectively than we can today.”

    Pressed by Tapper about AI and compensation concerns listed in a recent letter signed by thousands of authors, Smith said: “I don’t want it to undermine anybody’s ability to make a living by creating, by writing. That is the balance that we should all want to strike.”

    All of the commitments are voluntary and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to the commitments, some of which also lack specificity.

    Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

    “If we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service,” Common Sense Media CEO James Steyer said in a statement. “And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families.”

    The federal government’s failure to regulate social media companies at their inception – and the resistance from those companies – has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.

    “The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible,” Reed said. “The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it’s important for this bridge to regulation to be a sturdy one.”

    The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.

    White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.

    “This is a serious responsibility. We have to get it right. There’s an enormous, enormous potential upside as well,” Biden said.

    In the meantime, White House officials say the companies will “immediately” begin implementing the voluntary commitments and hope other companies sign on in the future.

    “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we would welcome them choosing – joining these commitments,” a White House official said.

    This story has been updated with additional details.


  • WhatsApp unveils new video messaging feature | CNN Business




    (CNN) —

    WhatsApp will now let you record and send video clips directly in the messaging app, the Meta-owned platform announced this week.

    The instant video messages can be up to 60 seconds long and are protected with the app’s end-to-end encryption.

    “We think these will be a fun way to share moments with all the emotion that comes from video, whether it’s wishing someone a happy birthday, laughing at a joke, or bringing good news,” the company said Thursday in a blog post.

    The new feature will be similar to sending a voice message on the platform, the company added, and there will also be a way to record the video hands-free.

    The company said the new update has begun rolling out on the app and will be available to everyone in the coming weeks.

    Earlier this year, WhatsApp rolled out an update that lets users edit messages in the app (as long as it’s within 15 minutes after sending).

    The latest product update for WhatsApp comes on the heels of a better-than-expected earnings report from Meta. The company said Wednesday that revenue surged 11% year-over-year to $32 billion for its quarter ending in June, as CEO Mark Zuckerberg’s “year of efficiency” appears to be paying off for the social media giant.

    After a bruising 2022, shares of Meta stock have jumped more than 150% in 2023.


  • Hot box detectors didn’t stop the East Palestine derailment. Research shows another technology might have | CNN




    (CNN) —

    A failing, flaming wheel bearing doomed the rail car that derailed and created a catastrophe in East Palestine earlier this month, but researchers have offered a solution to the faulty detectors that experts say could have averted the disaster unfolding in the small Ohio town.

    These wayside hot box detectors, stationed on rail tracks every 20 miles or so, use infrared sensors to record the temperatures of railroad bearings as trains pass by. If they sense an overheated bearing, the detectors trigger an alarm, which notifies the train crew they should stop and inspect the rail car for a potential failure.

    So why did these detectors miss a bearing failure before the catastrophe?

    An investigation into hot box detectors published in 2019 and funded by the Department of Transportation found that one “major shortcoming” of these detectors is that they can’t distinguish between healthy and defective bearings, and temperature alone is not a good indicator of bearing health.

    “Temperature is reactive in nature, meaning by the time you’re sensing a high temperature in a bearing, it’s too late, the bearing is already in its final stages of failure,” Constantine Tarawneh, director of the University Transportation Center for Railways Safety (UTCRS) and lead investigator of the study, told CNN.

    As part of the investigation, the UTCRS researchers developed a new system to better detect a bearing issue long before a catastrophic failure. The key: measuring the bearing’s vibration in addition to its temperature and load.

    The vibration of a failing bearing, Tarawneh says, often begins intensifying thousands of miles before a catastrophic failure. So his team created sensors that can be placed on board each rail car, near the bearing, to continuously monitor its vibration throughout its travels.

    “If you put an accelerometer on a bearing and you’re monitoring the vibration levels, the minute a defect happens in the bearing, the accelerometer will sense an increase in vibration, and that could be, in many cases, up to 100,000 miles before the bearing actually fails,” he said.
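    A minimal sketch of the idea, assuming a hypothetical onboard sensor that periodically reports vibration readings (the baseline, multiplier and readings below are invented for illustration, not UTCRS parameters):

      // Hypothetical sketch: flag a bearing when its vibration rises well above
      // its own healthy baseline, long before its temperature looks abnormal.
      function flagBearing(readingsG: number[], baselineG: number, multiplier = 3): boolean {
        // Flag if any recent RMS vibration reading exceeds the healthy baseline
        // by the chosen factor.
        return readingsG.some((r) => r > baselineG * multiplier);
      }

      const healthyBaselineG = 0.5;  // illustrative RMS level for a healthy bearing
      console.log(flagBearing([0.4, 0.6, 0.5], healthyBaselineG));  // false: normal scatter
      console.log(flagBearing([0.5, 1.9, 2.2], healthyBaselineG));  // true: early defect signature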

    Tarawneh, who argues the technology should be federally mandated, says had it been on board Norfolk Southern’s line it would have prevented the derailment in East Palestine.

    “It would have detected the problem months before this happened,” he said. “There wouldn’t have been a derailment.”

    A preliminary report from the East Palestine derailment, released Thursday by the National Transportation Safety Board, found hot box sensors detected that a wheel bearing was heating up miles before it eventually failed and caused the train to derail. But the detectors didn’t alert the crew until it was too late.

    The bearing, according to the report, was 38 degrees above ambient temperature when it passed a hot box detector 30 miles outside East Palestine. No alert went out, the NTSB said.

    Ten miles later, the next hot box detector recorded that the bearing had reached 103 degrees above ambient. Video of the train recorded in that area shows sparks and flames around the rail car. Still, no alert went to the crew.

    It wasn’t until a further 20 miles down the tracks, as the train reached East Palestine, that a hot box detector recorded the bearing’s temperature at 253 degrees above ambient and sent an alarm message instructing the crew to slow and stop the train to inspect a hot axle, the report said.

    The crew slowed the train, the report added, leading to an automatic emergency brake application. After the train stopped, the crew observed the derailment.

    The reason those first two readings didn’t trigger an alert, the report said, is that Norfolk Southern’s policy is to stop and inspect a bearing only after it has reached 170 degrees above ambient temperature. The NTSB is planning to review Norfolk Southern’s use of wayside hot box detectors, including their spacing and the temperature threshold that determines when crews are alerted.
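    A minimal sketch of that alerting policy, using only the threshold and readings reported above (the function is illustrative, not Norfolk Southern’s actual software):

      // Threshold-based hot box alerting, per the figures in the NTSB report.
      const ALERT_THRESHOLD_F = 170;  // degrees above ambient, the reported policy threshold

      function crewAlert(tempAboveAmbientF: number): boolean {
        return tempAboveAmbientF >= ALERT_THRESHOLD_F;
      }

      // The three readings described above: 38, 103, then 253 degrees above ambient.
      for (const reading of [38, 103, 253]) {
        console.log(`${reading}F above ambient -> ${crewAlert(reading) ? "alert crew" : "no alert"}`);
      }
      // Only the final reading crosses the threshold, which is why no alarm
      // sounded until the train reached East Palestine.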

    “Had there been a detector earlier, that derailment may not have occurred,” said NTSB Chair Jennifer Homendy at a Thursday press conference.

    In a statement responding to the NTSB report, Norfolk Southern stressed that its hot box detectors were operating as designed, and that those detectors trigger an alarm at a temperature threshold that is “among the lowest in the rail industry.” CNN has reached out to Norfolk Southern for comment on vibration sensor technology.

    Hot box detectors are unregulated, so companies like Norfolk Southern can turn them on and off at their own discretion and choose the temperature threshold at which crews receive an alert.

    There are several causes of overheated roller bearings, including fatigue cracking, water damage, mechanical damage, a loose bearing or a wheel defect, according to the NTSB, and the agency says it is investigating what caused the failure in East Palestine.

    “Roller bearings fail, but it is absolutely critical for problems to be identified and addressed early so these aren’t run until failure,” Homendy said. “You cannot wait until they’ve failed. Problems need to be identified early, so something catastrophic like this does not occur again.”

    Hum Industrial Technology, a rail car telematics company, has licensed the vibration sensor technology created by Tarawneh and his team. And it has launched pilot programs with several rail companies. But at this point, those sensors are on very few trains operating in the United States, which Tarawneh largely blames on the cost of retrofitting and monitoring cars and what he sees as companies prioritizing profit.

    It’s not clear exactly what it would cost to retrofit every train car in operation with sensors today, but Hum Industrial Technology stressed that it would cost less to put a sensor on a bearing than to replace a bearing.

    “They see it as, well, why should we do it if it’s not mandated?” Tarawneh said. “It’s like a lot of people are saying, ‘well, I’m willing to take the risk. It’s not that many derailments per year.’”

    But Steve Ditmeyer, a former Federal Railroad Administration official, says equipping every rail car with onboard sensors may not be financially feasible.

    “What they’re proposing will work, but it’s very, very expensive,” Ditmeyer told CNN. “And one does have to take cost into consideration.”

    It would take more than 12 million onboard sensors, according to Tarawneh, to fully equip the roughly 1.6 million rail cars in service across North America.

    Ditmeyer says railroads should invest more heavily in wayside acoustic bearing detectors, which sit along the tracks – much like hot box detectors – and monitor the sound of passing trains. They listen for noise that indicates a bearing failure well before a potential catastrophe.

    As of 2019, only 39 acoustic bearing detectors were in use across North America, compared with more than 6,000 hot box detectors, according to a DOT report from that year.

    “They are the only way that I can think of that would have prevented the accident by having caught a failing bearing earlier,” Ditmeyer said.
