ReportWire

Tag: social media

  • 8 Expert Tips for Optimizing Your Professional Instagram Profile | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    In today’s digital age, having a professional Instagram profile is essential for businesses to establish a strong online presence. With over 1 billion active users, Instagram offers businesses a powerful platform to engage with their target audience, increase brand awareness and drive sales.

    Instagram is one of the most popular platforms for businesses to showcase their products and services. However, with so much competition on the platform, it’s important to make sure your profile stands out.

    As a social media agency owner who tracks the progress of over 30 professional profiles of business owners, I have noticed current trends that can help businesses improve their Instagram presence. Here are my insights on those trends, along with tips on how to make your Instagram profile the best it can be, so you can not only improve your image but also increase your revenue.

    Related: Why Instagram Is Every Entrepreneur’s Most Powerful Tool

    1. Get the blue check

    The blue verification mark on Instagram is a sign of authenticity and credibility. It helps users identify legitimate accounts and distinguishes them from imitators or fan accounts. Instagram now offers the blue mark through a subscription: for $15 per month, you can enjoy the benefits of being a verified account.

    Not only is it prestigious, but having a blue checkmark on your account will also help secure it. Recently, many scammers have created duplicates of people’s accounts and sent messages to their followers, asking for money or sending links to dangerous content. A blue checkmark will make you feel safe, knowing that everyone can tell it’s actually you.

    2. Optimize your bio

    Your bio is your chance to make a great first impression on your audience. It should be informative, engaging, and reflect your brand’s personality. Don’t forget to add keywords to your profile name, location and link to your business. This will make it easier for users to find and connect with you.

    If you’re using Instagram to sell products or services, make sure your profile is easy to understand. Your audience should be able to identify who you are and what you offer at first glance. Use clear and concise language in your bio, and don’t hesitate to highlight your unique selling proposition.

    3. Organize your highlights

    Highlights are a great way to showcase your best content and provide quick access to information about your brand. Make sure the content in your highlights is relevant and up-to-date.

    If you’re using Instagram for business purposes, it’s worth considering adding a list of your services, testimonials, before-and-after content, media coverage, education and frequently asked questions (FAQs) to your highlights. This can help potential customers or clients quickly understand what you offer, see the results you can achieve, and find answers to common questions.

    Related: How To Improve Your Engagement on Instagram

    4. Post more Reels

    Reels are a popular feature on Instagram that allows users to create short videos that can be shared with their followers. You can use Reels to showcase your products, share industry tips or give your audience a behind-the-scenes look at your business. Don’t be afraid to get creative with your content and experiment with different styles.

    If you consistently post Reels related to your business with catchy titles and well-made content, don’t be surprised if they bring you both new followers and new customers. Reels are a source of free traffic, and if the algorithm understands what your account is about, it can organically attract people who are interested in your type of services to your page.

    5. Talk to people

    The tone of voice you use on Instagram is important, so be nice to people and they’ll return the favor. If you sell directly from your page, use chatbots for automated funnels — this could increase your ROI significantly.

    And don’t be afraid to be funny – jokes and memes are a great way to add a bit of humor to your Instagram profile and engage with your followers. Many industry leaders, such as Elon Musk, use memes to connect with their audience.

    6. Use relevant hashtags

    Research and use relevant hashtags to increase the visibility of your posts. This will help you reach a wider audience and attract new followers to your Instagram account. Be sure to use only relevant and specific hashtags — generic ones like #love or #fun are not likely to attract your target audience.

    Here’s a secret tip: when you’re about to make a post on Instagram, go to the advanced settings and add relevant hashtags to the ‘Alt text’ section. This can help to promote your content to a relevant audience.

    7. Quality over quantity

    Always prioritize quality over quantity when posting on Instagram. Be sure to never post random content. Something that you might think is funny after a couple of glasses of champagne may not seem like a good idea afterward. Your followers may consider you to be less professional than you would like to appear. Unless you want to have a rockstar image, in which case, go for it — trashy is great then.

    Related: 6 Instagram Marketing Strategies for Small Businesses

    8. Warm up your audience

    Talk to your followers from the perspective of the meaning and value that your service or product brings to the world. Inspire and show what is possible to achieve with your input. Some people may follow you for years before they decide to buy, so be persistent and your social media will pay off!

    In conclusion, creating a professional-looking Instagram profile can greatly benefit your business in terms of brand awareness and sales. As a social media agency owner, I have seen firsthand the impact that a well-curated Instagram profile can have on a business’s success.

    By following these eight tips, you can elevate your Instagram profile and make it stand out among the millions of other accounts on the platform. From adding a blue mark to your profile to using relevant hashtags, every little detail can make a big difference in how your profile is perceived by potential customers.

    Remember to always put quality over quantity when it comes to your content and engage with your audience in a friendly and approachable tone. By warming up your audience and sharing the value of your product or service, you can build a loyal following that will eventually convert into paying customers.

    While the world of social media is constantly evolving, these tips will help you stay ahead of the curve and create a professional-looking Instagram profile that will help you achieve your business goals. With dedication and persistence, you can use Instagram to grow your brand, connect with your audience and ultimately drive sales.

    Aleksandra Sasha Tikhomirova

    Source link

  • Social media expert gives bird’s-eye view on Twitter spat with NPR, PBS

    Social media platform Twitter, under the ownership of tech mogul Elon Musk, labeled National Public Radio and the Public Broadcasting Service as “U.S. state-affiliated media,” prompting both prominent news outlets to stop using the platform. The dispute is the latest in an escalating series of conflicts between Musk and media outlets of multiple stripes.

    Mike Horning, an associate professor of multimedia journalism at Virginia Tech’s School of Communication, provides perspective on Twitter’s increasingly volatile relationship with news organizations and the advantages and disadvantages of Musk’s approach.

    Q: Twitter has for years been journalists’ social media platform of choice. Why would Elon Musk push back against this?

    “Since purchasing Twitter, Musk has tried to position himself as the antidote for a tech industry that he believes has been oppressive to both certain forms of speech and certain political views. He sees the media as complicit in supporting those dominant ideologies that are favored by social media companies, so it is not surprising to see him antagonize those forms of media that he feels have not objectively reported news.”

    Q: How might this affect Twitter as a business?

    “So far, it seems that this approach is only further alienating some media companies and some audiences on Twitter. However, research shows that almost half of the audience on Twitter goes there to get news. Musk no doubt knows this and may feel that news organizations will eventually need to come back to Twitter if they want to distribute their content to their audience.”

    Q: What alternatives do news organizations have when it comes to social media platforms?

    “News organizations must grapple with the fact that, given changes to Facebook’s algorithms, their content has less emphasis there. News organizations could perhaps look to TikTok as another place to distribute their content, but with that app currently under congressional scrutiny, that may not be an ideal option. Thus, Twitter still remains an important resource for news organizations that want to get their content into social streams.”

    Q: Is there any way these conflicts work in Elon Musk’s favor?

    “Musk gains a few things by this behavior. General trust in the news media has been on a decline for decades. These trust levels are particularly low among Republicans and independents. By taking on ‘big media,’ Musk is able to position his version of Twitter among those two demographics as a place that may be more open to an exchange of ideas. That may attract new users to Twitter in the future, but so far things haven’t worked out that way.”

    About Michael Horning 
    Mike Horning is an associate professor of multimedia journalism in the Virginia Tech School of Communication. His research examines how communication technologies impact social attitudes and behaviors, with a current focus on the impact of “fake news” and misinformation on our democratic processes. His expertise has been featured in The Hill, on Sinclair Broadcast Group, and in a number of other media outlets. Read more about him here.

    Virginia Tech

    Source link

  • How to Build Your Personal Brand on LinkedIn as an Entrepreneur | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Business branding and personal branding are now closely intertwined, especially for companies led by knowledgeable and charismatic founders. It’s thus understandable that so many entrepreneurs and C-suite-bound professionals are developing their own online personas and audiences. It’s not enough anymore to simply offer your talents to recruiters; ideally, you want to bring a built-in following with you.

    The best social platform, by far, for nurturing a personal brand is LinkedIn.

    Related: Avoid These 8 Mistakes Leaders Make on LinkedIn Every Day

    Having spent years as a social media consultant, I have seen firsthand how LinkedIn has become a powerful tool for executives to expand their brand’s reach and grow their digital following.

    Here are some reasons company executives should be active on LinkedIn:

    Build credibility and trust

    LinkedIn is a professional social network and is the prime location where people go to learn more about your professional experience and skills. By creating a strong LinkedIn profile and sharing valuable content, you can build credibility and establish yourself as an expert in your industry. This can help build trust with your audience, which is essential for building your personal brand and a successful business.

    Start by enabling Creator Mode on your LinkedIn account. Enabling this will give your account a follow button rather than a connect button, making it so that others will see your posted content without you having to approve every single connection request.

    Creator Mode can aid you in growing your personal brand by expanding your connections and having your content reach a much larger professional audience on LinkedIn.

    Expand your network

    With over 700 million users worldwide, LinkedIn is a great platform for expanding your network. By connecting with other professionals in your industry, you can build relationships that can lead to new business opportunities, partnerships and collaborations. The site also allows you to join groups and hold discussions with like-minded professionals, which can help you learn and grow in your industry.

    Be bold and add people you don’t know on LinkedIn but share mutual connections with; growing your reach will require adding users connected to your field of work whom you may not know personally.

    Adding your LinkedIn URL to your email signature will also direct professionals you communicate with to your page so you can make more consistent connections.

    Related: 5 Tips for Creating LinkedIn Posts That Will Drive Valuable Engagement

    Increase brand visibility

    LinkedIn is a powerful platform for increasing your brand’s visibility. By sharing content, you can reach a wider audience and increase your brand’s reach. You can also use the popular social media network to promote your business and share updates about your company. By being active on the platform, you can ensure that your brand is top of mind for your audience.

    Similar to other social media platforms, hashtags can be a crucial way to increase your personal brand’s exposure. Hashtags function the same way they do on platforms like Instagram and Twitter, and ultimately will boost your engagement and reach with every post.

    There are also a handful of tricks you can utilize when using hashtags to make your posts that much more discoverable. Avoid spacing in your hashtags; for example, if you want to use the hashtag “LinkedIn Executives,” type it out as “#LinkedInExecutives.”

    Don’t go crazy with the number of hashtags in a simple post; one or two per post should do the trick. Keep them short and simple, as shorter hashtags are typically more popular. You can also follow specific hashtags to ensure that specific content makes its way to your feed. You can do so by searching for a specific hashtag and tapping the “follow” button.
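
    As a rough, hypothetical illustration of the formatting rules above, here is a minimal Python sketch that turns plain phrases into LinkedIn-style hashtags. The helper name, the 30-character cutoff and the two-tags-per-post cap are assumptions made for the example, not anything LinkedIn itself prescribes.

        # Minimal sketch of the hashtag rules described above (an assumed helper, not a LinkedIn API):
        # remove spaces, prefix with '#', prefer short tags, and cap the number of tags per post.
        def format_hashtags(phrases, max_tags=2, max_length=30):
            """Turn plain phrases into hashtags, e.g. 'LinkedIn Executives' -> '#LinkedInExecutives'."""
            tags = []
            for phrase in phrases:
                tag = "#" + "".join(word if word[:1].isupper() else word.capitalize()
                                    for word in phrase.split())
                if len(tag) <= max_length:   # shorter hashtags are typically more popular
                    tags.append(tag)
                if len(tags) == max_tags:    # one or two per post should do the trick
                    break
            return tags

        print(format_hashtags(["LinkedIn Executives", "personal branding", "entrepreneurship"]))
        # ['#LinkedInExecutives', '#PersonalBranding']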

    Nailing your personal and business tone of voice is also an extremely effective way to grow your personal brand, increase your company’s outreach and create loyal customers. The tone of voice can demonstrate personality and bring visual assets to life. Done correctly, it can be a valuable bulwark against negative situations and build up personality. Even if your products or services aren’t particularly unique or different from what the market offers, that doesn’t mean your approach has to be a dime a dozen.

    When considering what your tone of voice should be for your personal or your business’s brand, it is essential to weigh precisely how you want to come across to partners and customers, both verbally and in written content, what style guides you want to follow, and how all of this will come to life in action.

    Utilizing hashtags and nailing the correct tone of voice can be the difference between growing your personal brand and having it stuck in the mud.

    Attract top talent

    LinkedIn is also a great platform for attracting top talent to your company. By sharing updates about your company and culture, you can showcase what makes your business unique and attract candidates who are a good fit for your organization. You can also use LinkedIn to post job openings and connect with potential candidates.

    In addition to these benefits, LinkedIn is also the ideal platform for building a personal brand. Unlike other social media platforms, LinkedIn is focused on professional development and career advancement. By sharing content and engaging with your audience on the site, you can position yourself as a thought leader in your industry and build a brand that is synonymous with your business.

    So, if you are a company executive looking to expand your brand’s reach, don’t overlook the power of LinkedIn. By being active on the platform, you can build credibility, expand your network, increase your brand’s visibility, and attract top talent. And by building a strong personal brand on LinkedIn, you can help ensure the long-term success of your business.

    Jenny Karn

    Source link

  • Twitter quietly removes policy against misgendering, deadnaming transgender people

    Twitter has quietly erased a policy against the “targeted misgendering or deadnaming of transgender individuals,” raising concerns that the Elon Musk-owned platform may become less safe for marginalized groups.

    In 2018, Twitter had enacted the policy against deadnaming, or using a transgender person’s name before they transitioned, as well as purposefully using the wrong gender for someone as a form of harassment.

    On Monday, Twitter also said it will only put warning labels on some tweets that are “potentially” in violation of its rules against hateful conduct. Previously, the tweets were removed. Within this policy update, it appears that Twitter deleted the line against deadnaming from its rules.

    LGBTQ+ groups said the rollback of the policy could make the platform less safe for marginalized people, and demonstrates that the service is “out of step” with other major social media networks.

    “Twitter’s decision to covertly roll back its longtime policy is the latest example of just how unsafe the company is for users and advertisers alike,” said Sarah Kate Ellis, the president and CEO of the advocacy group GLAAD, in a statement.

    Ellis noted that TikTok, Pinterest and Meta all maintain policies to protect transgender users at a time when anti-transgender rhetoric is on the rise.

    Twitter responded to a request for comment from CBS MoneyWatch with a poop emoji.

    Leaving for safety

    Some Twitter users commented that they plan to leave the platform in response to the policy change. “This will be my last tweet and I will no longer be on Twitter for the sake of my own safety,” one Twitter user wrote on Wednesday. 

    According to GLAAD, Twitter’s Hateful Content Policy had earlier stated, “We prohibit targeting others with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.”

    But with the latest update, that last sentence has been removed. 

    Billionaire Musk, who bought Twitter last year, has a transgender child. Vivian Jenna Wilson last year legally changed her name and gender, taking on the last name of her mother, a day after turning 18.

    “I no longer live with or wish to be related to my biological father in any way, shape or form,” Wilson stated in court papers at the time.

    Source link

  • Reddit co-founder Alexis Ohanian turns focus to climate change innovations with new foundation

    Five years after stepping away from daily duties at the internet company he co-founded, Alexis Ohanian is pouring money and attention into 776, a funding mechanism that gives $100,000 grants to young climate-focused innovators. He says if he began his career over again, it would start with climate solutions. Ben Tracy reports.

    Source link

  • Long COVID Is Being Erased—Again

    Updated at 6:29 p.m. ET on April 21, 2023

    Charlie McCone has been struggling with the symptoms of long COVID since he was first infected, in March 2020. Most of the time, he is stuck on his couch or in his bed, unable to stand for more than 10 minutes without fatigue, shortness of breath, and other symptoms flaring up. But when I spoke with him on the phone, he seemed cogent and lively. “I can appear completely fine for two hours a day,” he said. No one sees him in the other 22.  He can leave the house to go to medical appointments, but normally struggles to walk around the block. He can work at his computer for an hour a day. “It’s hell, but I have no choice,” he said. Like many long-haulers, McCone is duct-taping himself together to live a life—and few see the tape.

    McCone knows 12 people in his pre-pandemic circles who now also have long COVID, most of whom confided in him only because “I’ve posted about this for three years, multiple times a week, on Instagram, and they’ve seen me as a resource,” he said. Some are unwilling to go public, because they fear the stigma and disbelief that have dogged long COVID. “People see very little benefit in talking about this condition publicly,” he told me. “They’ll try to hide it for as long as possible.”

    I’ve heard similar sentiments from many of the dozens of long-haulers I’ve talked with, and the hundreds more I’ve heard from, since first reporting on long COVID in June 2020. Almost every aspect of long COVID serves to mask its reality from public view. Its bewilderingly diverse symptoms are hard to see and measure. At its worst, it can leave people bed- or housebound, disconnected from the world. And although milder cases allow patients to appear normal on some days, they extract their price later, in private. For these reasons, many people don’t realize just how sick millions of Americans are—and the invisibility created by long COVID’s symptoms is being quickly compounded by our attitude toward them.

    Most Americans simply aren’t thinking about COVID with the same acuity they once did; the White House long ago zeroed in on hospitalizations and deaths as the measures to worry most about. And what was once outright denial of long COVID’s existence has morphed into something subtler: a creeping conviction, seeded by academics and journalists and now common on social media, that long COVID is less common and severe than it has been portrayed—a tragedy for a small group of very sick people, but not a cause for societal concern. This line of thinking points to the absence of disability claims, the inconsistency of biochemical signatures, and the relatively small proportion of severe cases as evidence that long COVID has been overblown. “There’s a shift from ‘Is it real?’ to ‘It is real, but …,’” Lekshmi Santhosh, the medical director of a long-COVID clinic at UC San Francisco, told me.

    Yet long COVID is a substantial and ongoing crisis—one that affects millions of people. However inconvenient that fact might be to the current “mission accomplished” rhetoric, the accumulated evidence, alongside the experience of long haulers, makes it clear that the coronavirus is still exacting a heavy societal toll.


    As it stands, 11 percent of adults who’ve had COVID are currently experiencing symptoms that have lasted for at least three months, according to data collected by the Census Bureau and the CDC through the national Household Pulse Survey. That equates to more than 15 million long-haulers, or 6 percent of the U.S. adult population. And yet, “I run into people daily who say, ‘I don’t know anyone with long COVID,’” says Priya Duggal, an epidemiologist and a co-lead of the Johns Hopkins COVID Long Study. The implication is that the large survey numbers cannot be correct; given how many people have had COVID, we’d surely know if one in 10 of our contacts was persistently unwell.

    But many factors make that unlikely. Information about COVID’s acute symptoms was plastered across our public spaces, but there was never an equivalent emphasis that even mild infections can lead to lasting and mercurial symptoms; as such, some people who have long COVID don’t even know what they have. This may be especially true for the low-income, rural, and minority groups that have borne the greatest risks of infection. Lisa McCorkell, a long-hauler who is part of the Patient-Led Research Collaborative, recently attended a virtual meeting of Bay Area community leaders, and “when I described what it is, some people in the chat said, ‘I just realized I might have it.’”

    Admitting that you could have a life-altering and long-lasting condition, even to yourself, involves a seismic shift in identity, which some people are understandably loath to make. “Everyone I know got Omicron and got over it, so I really didn’t want to concede that I didn’t survive this successfully,” Jennifer Senior, a friend and fellow staff writer at The Atlantic, who has written about her experience with long COVID, told me. Duggal mentioned an acquaintance who, after a COVID reinfection, can no longer walk the quarter mile to pick her kids up from school, or cook them dinner. But she has turned down Duggal’s offer of an appointment; instead, she is moving across the country for a fresh start. “That is common: I won’t call it ‘long COVID’; I’ll just change everything in my life,” Duggal told me. People who accept the condition privately may still be silent about it publicly. “Disability is often a secret we keep,” Laura Mauldin, a sociologist who studies disability, told me. One in four Americans has a disability; one in 10 has diabetes; two in five have at least two chronic diseases. In a society where health issues are treated with intense privacy, these prevalence statistics, like the one-in-10 figure for long COVID, might also intuitively feel like overestimates.

    Some long-haulers are scared to disclose their condition. They might feel ashamed for still being sick, or wary about hearing from yet another loved one or medical professional that there’s nothing wrong with them. Many long-haulers worry that they’ll be perceived as weak or needy, that their friends will stop seeing them, or that employers will treat them unfairly. Such fears are well founded: A British survey of almost 1,000 long-haulers found that 63 percent experienced overt discrimination because of their illness at least “sometimes,” and 34 percent sometimes regretted telling people that they have long COVID. “So many people in my life have reached out and said, ‘I’m experiencing this,’ but they’re not telling the rest of our friends,” McCorkell said.

    Imagine that you interact with 50 people on a regular basis, all of whom got COVID. If 10 percent are long-haulers, that’s five people who are persistently sick. Some might not know what long COVID is or might be unwilling to confront it. The others might have every reason to hide their story. “Numbers like 10 percent are not going to naturally present themselves in front of you,” McCone told me. Instead, “you’ll hear from 45 people that they are completely fine.”

    The same factors that stop people from being public about their condition—ignorance, denial, or concerns about stigma—also make them less likely to file for disability benefits. And that process is, to put it mildly, not easy. Applicants need thorough medical documentation; many long-haulers struggle to find doctors who believe their symptoms are real. Even with the right documents, applicants must hack their way through bureaucratic overgrowth, likely while fighting fatigue or brain fog. For these reasons, attempting to measure long COVID through disability claims is a profoundly flawed exercise. Even if people manage to apply, they face an average wait time of seven months and a two-in-three denial rate. McCone took six weeks to put an application together, and, despite having a lawyer and extensive medical documentation, was denied after one day. McCorkell knows many first-wavers—people who’ve had long COVID since March 2020—“who are just getting their approvals now.”

    An alternative source of data comes from the Census Bureau’s Current Population Survey, which simply asks working-age Americans if they have any of six forms of disability. Using that data, Richard Deitz, an economics-research adviser at the Federal Reserve Bank of New York, calculated that about 1.7 million more people now say they do than in mid-2020, reversing a years-long decline. These numbers are lower than expected if one in 10 people who gets COVID really does become a long-hauler, but the survey doesn’t directly capture many of the condition’s most common symptoms, such as fatigue, neurological problems beyond brain fog, and post-exertional malaise, where a patient’s symptoms get dramatically worse after physical or mental exertion. About 900,000 of the newly disabled people are also still working. David Putrino, who leads a long-COVID rehabilitation clinic at Mount Sinai, told me that many of his patients are refused the accommodations required under the Americans With Disabilities Act. Their employers won’t allow them to work remotely or reduce their hours, because, he said, “you look at them and don’t see an obvious disability.”


    Long COVID can also seem bafflingly invisible when people look at it with the wrong tools. For example, a 2022 study by National Institutes of Health researchers compared 104 long-haulers with 85 short-term COVID patients and 120 healthy people and found no differences in measures of heart or lung capacities, cognitive tests, or levels of common biomarkers—bloodstream chemicals that might indicate health problems. This study has been repeatedly used as evidence that long COVID might be fictitious or psychosomatic, but in an accompanying editorial, Aluko Hope, the medical director of Oregon Health and Science University’s long-COVID program, noted that the study exactly mirrors what long-haulers commonly experience: They undergo extensive testing that turns up little and are told, “Everything is normal and nothing is wrong.”

    The better explanation, Putrino told me, is that “cookie-cutter testing” doesn’t work—a problem that long COVID shares with other neglected complex illnesses, such as myalgic encephalomyelitis/chronic-fatigue syndrome and dysautonomia. For example, the NIH study didn’t consider post-exertional malaise, a cardinal symptom of both ME/CFS and long COVID; measuring it requires performing cardiopulmonary tests on two successive days. Most long-haulers also show spiking heart rates when asked to simply stand against a wall for 10 minutes—a sign of problems with their autonomic nervous system. “These things are there if you know where to look,” Putrino told me. “You need to listen to your patients, hear where the virus is affecting them, and test accordingly.”

    Contrary to popular belief, researchers have learned a huge amount about the biochemical basis of long COVID, and have identified several potential biomarkers for the disease. But because long COVID is likely a cluster of overlapping conditions, there might never be a singular blood test that “will tell you if you have long COVID 100 percent of the time,” Putrino said. The best way to grasp the scale of the condition, then, is still to ask people about their symptoms.

    Large attempts to do this have been relatively consistent in their findings: The U.S. Household Pulse Survey estimates that one in 10 people who’ve had COVID currently have long COVID; a large Dutch study put that figure at one in eight. The former study also estimated that 6 percent of American adults are long-haulers; a similar British survey by the Office for National Statistics estimated that 3 percent of the general population is. These cases vary widely in severity, and about one in five long-haulers is barely affected by their symptoms—but the remaining majority very much is. Another one in four long-haulers (or 4 million Americans) has symptoms that severely limit their daily activities. The others might, at best, wake every day feeling as if they haven’t had any rest, or feel trapped in an endless hangover. They might work or socialize when their tidal symptoms ebb, but only by making big compromises: “If I work a full day, I can’t also then make dinner or parent without significant suffering,” JD Davids, who has both long COVID and ME/CFS, told me.

    Some people do recover. A widely cited Israeli study of 1.9 million people used electronic medical records to show that most lingering COVID symptoms “are resolved within a year from diagnosis,” but such data fail to capture the many long-haulers who give up on the medical system precisely because they aren’t getting better or are done with being disbelieved. Other studies that track groups of long-haulers over time have found less rosy results. A French one found that 85 percent of people who had symptoms two months after their infection were still symptomatic after a year. A Scottish team found that 42 percent of its patients had only partially recovered at 18 months, and 6 percent had not recovered at all. The United Kingdom’s national survey shows that 69 percent of people with long COVID have been dealing with symptoms for at least a year, and 41 percent for at least two.

    The most recent data from the U.S. and the U.K. show that the total number of long-haulers has decreased over the past six months, which certainly suggests that people recover in appreciable numbers. But there’s a catch: In the U.K., the number of people who have been sick for more than a year, or who are severely limited by their illness, has gone up. A persistent pool of people is still being pummeled by symptoms—and new long-haulers are still joining the pool. This influx should be slower than ever, because Omicron variants seem to carry a lower risk of triggering long COVID, while vaccines and the drug Paxlovid can lower that risk even further. But though the odds against getting long COVID are now better, more people are taking a gamble, because preventive precautions have been all but abandoned.

    Even if prevalence estimates were a tenth as big, that would still mean more than 1 million Americans are dealing with a chronic illness that they didn’t have three years ago. “When long COVID first came on the scene, everyone told us that once we have the prevalence numbers, we can do something about it,” McCorkell told me. “We got those numbers. Now people say, ‘Well, we don’t believe them. Try again.’”


    To a degree, I sympathize with some of the skepticism regarding long COVID, because the condition challenges our typical sense of what counts as solid evidence. Blood tests, electronic medical records, and disability claims all feel like rigorous lines of objective data. Their limitations become obvious only when you consider what the average long-hauler goes through—and those details are often cast aside because they are “anecdotal” and, by implication, unreliable. This attitude is backwards: The patients’ stories are the ground truth against which all other data must be understood. Gaps between the data and the stories don’t immediately invalidate the latter; they just as likely show the holes in the former.

    Laura Mauldin, the disability sociologist, argues that the U.S. is primed to discount those experiences because the country’s values—exceptionalism, strength, self-reliance—have created what she calls the myth of the able-bodied public. “We cannot accept that our bodies are fallible, or that disability is utterly ordinary and expected,” she told me. “We go to great pains to pretend as though that is not the case.” If we believe that a disabling illness like long COVID is rare or mild, “we protect ourselves from having to look at it.” And looking away is that much easier because chronic illnesses like long COVID are more likely to affect women—“who are more likely to have their symptoms attributed to psychological problems,” Mauldin said—and because the American emphasis on work ethic devalues people who can’t work as much or as hard as their peers.

    Other aspects of long COVID make it hard to grasp. Like other similar, neglected chronic illnesses, it defies a simplistic model of infectious disease in which a pathogen causes a predictable set of easily defined symptoms that alleviate when the bug is destroyed. It challenges our belief in our institutions, because truly contending with what long-haulers go through means acknowledging how poorly the health-care system treats chronically ill patients, how inaccessible social support is to them, and how many callous indignities they suffer at the hands of even those closest to them. Long COVID is a mirror on our society, and the image it reflects is deeply unflattering.

    Most of all, long COVID is a huge impediment to the normalization of COVID. It’s an insistent indicator that the pandemic is not actually over; that policies allowing the coronavirus to spread freely still carry a cost; that improvements such as better indoor ventilation are still wanting; that the public emergency may have been lifted but an emergency still exists; and that millions cannot return to pre-pandemic life. “Everyone wants to say goodbye to COVID,” Duggal told me, “and if long COVID keeps existing and people keep talking about it, COVID doesn’t go away.” The people who still live with COVID are being ignored so that everyone else can live with ignoring it.


    This article originally misstated the name of the bank where Richard Deitz works.

    Ed Yong

    Source link

  • Twitter removes policy against deadnaming transgender people

    Twitter has quietly removed a policy against the “targeted misgendering or deadnaming of transgender individuals.”

    By Barbara Ortutay, AP Technology Writer

    SAN FRANCISCO — Twitter has quietly removed a policy against the “targeted misgendering or deadnaming of transgender individuals,” raising concerns that the Elon Musk-owned platform is becoming less safe for marginalized groups.

    Twitter enacted the policy against deadnaming, or using a transgender person’s name before they transitioned, as well as purposefully using the wrong gender for someone as a form of harassment, in 2018.

    On Monday, Twitter also said it will only put warning labels on some tweets that are “potentially” in violation of its rules against hateful conduct. Previously, the tweets were removed.

    It was in this policy update that Twitter appears to have deleted the line against deadnaming from its rules.

    “Twitter’s decision to covertly roll back its longtime policy is the latest example of just how unsafe the company is for users and advertisers alike,” said Sarah Kate Ellis, the president and CEO of the advocacy group GLAAD. “This decision to roll back LGBTQ safety pulls Twitter even more out of step with TikTok, Pinterest, and Meta, which all maintain similar policies to protect their transgender users at a time when anti-transgender rhetoric online is leading to real world discrimination and violence.”

    Twitter did not immediately respond to a message seeking comment Tuesday.

    Source link

  • 7 Short-Form Video Mistakes to Avoid in Your Marketing Strategy | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Over the past few years, short-form video has become one of the most engaging and fastest-growing content types across social media platforms.

    TikTok, YouTube Shorts and Instagram Reels have all seen meteoric growth, garnering billions of users. According to Statista, Shorts alone boasts 30 billion views daily and 1.5 billion monthly active users in 2023.

    Consequently, vertical short-form video has immense potential for digital marketers and businesses alike, and many now incorporate it into their strategies.

    However, there are several typical pitfalls you need to dodge to leverage the full power of vertical video. Here are the seven most common short-form video mistakes to avoid in your marketing strategy.

    1. Expecting instant results

    First off, it’s essential to keep your expectations realistic. While short-form video often gets high engagement and can go viral, don’t expect your follower count to explode overnight.

    In the vast majority of cases, growing a following through short-form content still takes time, effort, and consistency. Especially if you don’t have existing baseline activity on your Instagram profile or YouTube channel, the Reels and Shorts algorithms can be slow to pick up your content.

    An awareness of this is crucial when setting milestones for your marketing strategy, helping you draw up realistic plans and preventing disappointment.

    Related: Top 5 Not-So-Obvious Social Media Marketing Mistakes You Must Avoid

    2. Neglecting (brand) consistency

    As mentioned already, consistency is key when creating short-form content, especially if you’re setting up new profiles.

    This doesn’t just mean regularly uploading new clips. It also means producing content with consistent quality and branding.

    The quality of your videography is key for engagement. And consistent branding — everything from editing style to logos and caption fonts — determines how memorable and recognizable your clips are.

    When drawing up your short-form strategy, investing time and resources in these branding aspects in advance is well worth it in the long run.

    3. Posting irrelevant clips

    The next major pitfall for your short-form strategy is the type of content you produce.

    Ultimately, your aim is to increase brand awareness, highlight your expertise and your products — and to convert viewers into customers.

    That means your content has to be relevant to these goals.

    Let’s say you are a graphic design agency. There is little point in putting effort into reproducing TikTok dances or engaging in challenges.

    Instead, focus on making your business relatable — e.g., “A day in the life of a graphic designer” — or showcasing your skills with hacks, demos and how-tos.

    Related: 8 Ways to Avoid Common Video Marketing Mistakes

    4. Making content too long and complex

    Short-form content on some platforms can run up to 2 minutes and 30 seconds. If you’re not used to producing clips like this, it can be tempting to exploit this limit to the fullest.

    In most cases, this is a mistake.

    While it is possible to make longer videos, shorter ones are still more successful. According to information TikTok shared with select creators in 2022, later reported by WIRED, approximately 25% of the most successful videos on the platform are between 21 and 34 seconds long.

    For Instagram Reels, the recommended duration is even shorter, with some industry experts putting it at a mere 7 to 15 seconds.

    The bottom line? Keep your content short and zesty.

    That means reducing the complexity of your message and the number of ideas you try to communicate in a single clip. In most cases, focusing on getting across one central idea is best.

    Another implication of this short recommended video length is that it’s essential to put extra effort into your hook. The first few seconds of your video have to immediately captivate your viewers’ attention — they have to pack visual panache and the promise of information and entertainment.

    5. Losing track of your target audience

    Another common mistake many businesses make when integrating short-form content into their marketing strategy is losing track of their target audience.

    Your marketing strategy should already be based around a clearly defined target audience and buyer personas. Short-form video content is no different.

    However, there are several adjustments you need to make. Short-form content is particularly popular among younger audiences, Gen Z and Millennials in particular. According to data released by Kepios in early 2023, the vast majority of TikTok’s above-18 ad audience is composed of people aged 18-24 (39%) and those aged 25-34 (32%).

    While older generations are slowly catching on to the use of short-form content, especially on Instagram and YouTube, the typical vertical video viewer is under 35. How you present your business needs to be adjusted for that.

    6. Not including captions

    On the technical side, a common shortcoming of short-form video content published by businesses is the lack of captions. It is a distinguishing feature of platforms like Shorts, Reels and TikTok that many viewers prefer to watch content on mute.

    According to recent statistics, 69% of viewers watch videos without sound, especially when in public. Consequently, they tend to scroll past clips that lack captions.

    In addition, well-designed captions with appropriate fonts, backgrounds, and colors can act as additional visual incentives and boost your overall engagement.

    Related: Add Captions to Your TikTok and Instagram Videos and Gain More Reach

    7. Forgetting your call to action

    Finally, one of the most common mistakes in short-form video for business purposes is to forget your call to action (CTA).

    Just getting viewers to watch your video is not the endgame. It’s to get them to take a particular action — to check out your services, start a trial, subscribe to your newsletter, follow your accounts, buy your products.

    That’s why including a CTA is essential, even in the shortest of your videos. You can include it in your script, captions, overlay, copy and comments. But you need to include it.

    The bottom line

    Short-form content on Reels, TikTok and Shorts has immense potential for boosting your business’ visibility.

    However, to succeed, you need to avoid the most common mistakes many businesses make when integrating vertical clips into their digital marketing strategy.

    By circumventing the pitfalls above, you’ll be able to elevate your brand using short-form content and avoid frustration along the way.

    Hasan Saleem

    Source link

  • AI-generated song not by Drake and The Weeknd pulled off digital platforms

    London — A song that clones the voices of A-list musicians Drake and The Weeknd using artificial intelligence was pulled from social media and music streaming platforms Tuesday following a backlash from publishing giant Universal Music Group, which said the song violated copyright law.

    The AI-generated song, “Heart on My Sleeve,” went viral over the weekend, racking up more than 8.5 million views on TikTok before being pulled off the platform Tuesday. The song, which the artists have never actually sung, was also pulled off many YouTube channels, though versions were still available on both platforms.

    Photo: Drake performs onstage in Toronto on Oct. 8, 2016, left, and The Weeknd performs during the halftime show of the NFL Super Bowl 55 football game on Feb. 7, 2021, in Tampa, Fla. (AP)


    The full version was played 254,000 times on Spotify before being yanked by the leading music streaming platform.

    Universal Music Group, which releases music by both Drake and The Weeknd, was quoted by the BBC as saying digital platforms have a “legal and ethical responsibility” to prevent the use of services that harm artists.

    The creator of the song, who’s been identified only by the handle “@ghostwriter,” claimed on their now-deleted YouTube account that the track was created using AI software trained on the musicians’ voices from existing video clips.  

    “I think that is part of what is making it difficult for the untrained ear to differentiate between these AI-generated and non-AI generated tunes,” music journalist Hattie Lindert told CBS News on Tuesday. “It’s pretty convincing when there are so many Drake tracks that AI can train from.”


    Video: “Google CEO: AI impact to be more profound than discovery of fire, electricity” (06:02)

    Neither artist has reacted publicly to the song, but Drake had previously been critical of his voice being cloned using artificial intelligence.

    “This is the final straw, AI,” he said in a now-deleted post on Instagram after seeing a fan-made AI-generated video in which he appeared to be rapping.

    This latest AI controversy comes as tech giants Microsoft and Google look set to go head-to-head as they develop competing AI-powered “chatbot” technology, following the launch of Google’s Bard AI software last month.

    “AI itself will pose its own problems. Could Hemingway write a better short story? Maybe. But Bard can write a million before Hemingway could finish one,” Google Senior Vice President James Manyika told “60 Minutes” correspondent Scott Pelley in an interview that aired on Sunday. “Imagine that level of automation across the economy.”

    Source link

  • How is TikTok affecting our mental health? It’s complicated, new U of M study shows

    Newswise — With the rise of TikTok, many people have wondered about its potential impacts on society, in particular surrounding mental health. According to a first-of-its-kind study from University of Minnesota Twin Cities computer science researchers, the social media platform and its unique algorithm can serve as both a haven and a hindrance for users struggling with their mental state. 

    The researchers’ study will be published in the proceedings of the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems. They will present their research at the conference, which takes place April 23-28.

    Through interviews with TikTok users, the University of Minnesota team found that the platform provided many people with a sense of self-discovery and community they were unable to find on other social media. However, the researchers said, the TikTok algorithm also displayed a worrying tendency to repeatedly expose users to content that could be harmful to their mental health.

    “TikTok is misunderstood by people who don’t use the platform,” explained Stevie Chancellor, senior author of the paper and an assistant professor in the University of Minnesota Department of Computer Science & Engineering. “They think of it as the dance platform or the place where everybody gets an ADHD diagnosis. Our research shows that TikTok helps people find community and mental health information. But, people should also be mindful of its algorithm, how it works, and when the system is providing them things that are harmful to their wellbeing.”

    TikTok is different from other social media platforms in that it is primarily run by a recommender system algorithm that displays videos it thinks you will like on your “For You Page” feed, as opposed to mostly showing posts from accounts you follow. While this can be great for showing you more content that you like, it can also lead to a rabbit hole of negative content that’s nearly impossible to escape from, the researchers said. 
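
    To make that distinction concrete, here is a small, hypothetical Python sketch contrasting a follow-based feed with an interest-based recommender of the general kind described above. The data and the scoring rule (ranking videos by overlap with topics a user has engaged with before) are invented for illustration and are not TikTok’s actual algorithm.

        # Toy illustration, not TikTok's real system: a follow-based feed only surfaces posts
        # from accounts the user follows, while a recommender ranks every video by predicted
        # interest (here, overlap between a video's topics and topics the user engaged with).
        videos = [
            {"id": 1, "creator": "friend_a", "topics": {"cooking"}},
            {"id": 2, "creator": "stranger_b", "topics": {"mental_health", "adhd"}},
            {"id": 3, "creator": "stranger_c", "topics": {"dance"}},
        ]
        user = {"follows": {"friend_a"}, "engaged_topics": {"mental_health", "anxiety"}}

        def follow_feed(videos, user):
            # Only videos from followed creators ever appear.
            return [v for v in videos if v["creator"] in user["follows"]]

        def for_you_feed(videos, user):
            # Rank every video by topic overlap with past engagement; ties keep their order.
            return sorted(videos, key=lambda v: len(v["topics"] & user["engaged_topics"]), reverse=True)

        print([v["id"] for v in follow_feed(videos, user)])   # [1]
        print([v["id"] for v in for_you_feed(videos, user)])  # [2, 1, 3]

    The same mechanism that surfaces helpful content from accounts a user never followed is also what keeps serving “more and more of the same content” once the system locks onto a topic.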

    “TikTok is a huge platform for mental health content,” said Ashlee Milton, first author of the paper and a University of Minnesota computer science and engineering Ph.D. student. “People tend to gravitate toward social media to find information and other people who are going through similar situations. A lot of our participants talked about how helpful this mental health information was. But at some point, because of the way the feed works, it’s just going to keep giving you more and more of the same content. And that’s when it can go from being helpful to being distressing and triggering.”

    The researchers found that when users get into harmful spirals of negative content, there often is no escape. The TikTok interface includes a “Not interested” button, but the study participants said it didn’t make any difference in the content that appeared in their feeds. 

    The research participants also expressed that it’s difficult to discern when TikTok creators are posting emotional or intense mental health content genuinely, or if they’re just “chasing clout” to gain more followers and likes. Many participants were forced to take breaks or quit using the platform entirely because of the stress it caused.

    According to the University of Minnesota researchers, all of this doesn’t mean TikTok is evil. But, they said, it is useful information to keep in mind when using the platform, especially for mental health purposes.

    “One of our participants jokingly referred to the For You page as a ‘dopamine slot machine,’” Milton said. “They talked about how they would keep scrolling just so that they could get to a good post because they didn’t want to end on a bad post. It’s important to be able to recognize what is happening and say, ‘Okay, let’s not do that.’”

    This study is the first in a series of papers Chancellor and Milton plan on writing about social media, TikTok, and mental health.

    “Ashlee and I are interested in how platforms may promote harmful behaviors to a person so that eventually, we can design strategies to mitigate those bad outcomes,” Chancellor said. “The first step in this process is interviewing people to make sure we understand their experiences on TikTok. We need insights from people before we as computer scientists go in and design to fix this problem.”

    In addition to Chancellor and Milton, the research team included University of Minnesota Twin Cities computer science and engineering Ph.D. student Leah Ajmani and University of Colorado Boulder researcher Michael Ann DeVito.

    University of Minnesota College of Science and Engineering

    Source link

  • The 5 Biggest Trends Changing Mobile Entertainment | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    Mobile entertainment is now a multi-billion-dollar global industry, evolving at breakneck speed as technological advances unlock new possibilities and shape consumer preferences in new and unexpected ways.

    Here is a look at the top five trends changing this industry today:

    1. Bite-sized, mobile-first entertainment

    Mobile phones and tablets have become ubiquitous, and user expectations are shifting towards mobile-first experiences optimized for smaller screens as a result. At the same time, leisure time is increasingly becoming a luxury as the pace of life for the active part of the population continues to speed up. One consequence is that users are increasingly drawn to content that can be enjoyed quickly and easily on the go. We have witnessed the rise of platforms like TikTok, Instagram Reels, YouTube Shorts and Yepp, serving up user-generated short-form content to a broad range of audiences.

    While there is a lot of discussion about the addictive properties of short-form entertainment, screen time regulation and age restrictions for platforms that offer bite-sized mobile fun, one thing is clear — this type of content has true mass appeal and is likely to remain a major fixture in the mobile entertainment space for the foreseeable future.

    Related: 4 Tech Trends Shaping the Future of Media and Entertainment

    2. Better connectivity

    More reliable connectivity, faster speed and greater proliferation of 5G are also transforming mobile entertainment in their own ways. Better connectivity enables developers to serve up more interactive experiences and data “heavy” formats, such as video streaming and conferencing, audio streaming, podcasting and networked gaming. This democratizes the creation of high-quality live content, which is no longer the exclusive turf of big broadcasting corporations, nor is it reliant upon wifi connectivity and a desktop device.

    In addition, the speed and coverage of 5G networks enable more precise location-based services. These enhance mobile entertainment experiences, such as augmented reality games or virtual tours, enabling a more immersive user experience.

    With the ability to provide higher-quality and more engaging content, mobile entertainment businesses can unlock new revenue streams, such as subscription-based services or pay-per-view options. By opening the door to richer, more interactive and more immersive content that can be consumed on the go, improved connectivity directly expands the possibilities for entertainment on mobile devices and fuels industry growth.

    3. AI and machine learning

    Artificial intelligence (AI) has a profound effect on mobile entertainment. AI-based tools such as machine learning help developers improve and optimize backend processes, streamlining repetitive tasks, improving content moderation, and enabling leaner teams to achieve results. AI also helps provide the more targeted, personalized entertainment experience that consumers have come to expect – serving up content based on a user’s interests and past viewing behavior.

    While AI is also making it easier to generate content, including text, images and video, users are increasingly looking for content that feels authentic and relatable – something that is still hard, if not impossible, for AI to produce.

    Therefore, when it comes to funny videos, fun memes and similar entertainment, user-generated content is still king for now, while AI works backstage to enhance how it is delivered and consumed.

    Related: The FBI Says Hackers Are Using Public Phone Chargers to Steal Your Information. Here’s How To Avoid Falling Victim to the Scam.

    4. Social media integration

    An argument has been made that mobile technologies are making us less sociable as a society, with some even ringing alarm bells that the art of casual in-person communication is in danger of being lost. After all, look around when riding the subway, and you’ll see most of your fellow passengers with their heads bent over their mobile devices, completely oblivious to their surroundings and, more often than not, entirely uninterested in striking up a conversation (which is not such a bad thing, to be honest). However, within the confines of the digital world, the opposite trend is underway, and consumers increasingly expect entertaining content that is much more social and interactive.

    Users are no longer passive consumers who just want to play a game or watch a video. Increasingly, they prefer to interact with other players, share their memes, comment on the videos they watch and otherwise engage with their digital communities and audiences. This trend is prompting the integration of social media functionality into mobile entertainment apps, providing more opportunities for users to interact with others online and within their digital communities.

    Related: How to Think Outside Your Industry and Revolutionize the Customer Journey

    5. AR and VR

    Advances in augmented reality (AR) and virtual reality (VR) tech have opened new possibilities for mobile entertainment. AR technology allows users to overlay digital content on top of the real world, creating a more engaging and interactive experience for users. Sharing features within social apps enable users to capture and share their AR experiences, such as swapping faces in photos or putting funny filters on images. AR also enables location-based experiences in social apps, which can be used for real-world events or virtual events. Users can interact with digital content tied to their physical location, participate in AR-based scavenger hunts and other location-based games, or engage in pretend play, such as trying on countless pairs of e-sneakers.

    As a result of the many AR- and VR-enabled features coming to the market, consumers are starting to expect more immersive, personalized, interactive, real-time, multimodal, and accessible experiences, prompting a higher level of competition among gaming and mobile entertainment companies to meet these expectations.

    Max Kraynov

    Source link

  • Deepfake pornography could be a growing problem as AI editing programs become more sophisticated

    Deepfake pornography could be a growing problem as AI editing programs become more sophisticated

    Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns. But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual “deepfake” pornography.

    Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.

    Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images — essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.


    Easier to create and more difficult to detect

    The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.

    “The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

    Artificial images, real harm

    Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when, out of curiosity, she used Google one day to search for an image of herself. To this day, Martin said she doesn’t know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

    Horrified, Martin contacted different websites for a number of years in an effort to get the images taken down. Some didn’t respond. Others took the images down, but she soon found them posted again.

    “You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”

    The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.

    Eventually, Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for such content from online safety regulators.

    But governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, said she believes the problem has to be controlled through some sort of global solution.

    In the meantime, some AI models say they’re already curbing access to explicit images.


    Removing AI’s access to explicit content

    OpenAI said it removed explicit content from data used to train the image generating tool DALL-E, which limits the ability of users to create those types of images. The company also filters requests and said it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

    Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. Those changes came following reports that some users were creating celebrity-inspired nude pictures using the technology.

    Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it’s possible for users to manipulate the software and generate what they want since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”

    Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.

    TikTok, Twitch, others update policies

    TikTok said last month all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.

    The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.

    Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it’s intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.

    Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.

    Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn remains limited, but one report released in 2019 by the AI firm DeepTrace Labs found it was almost entirely weaponized against women, and the most targeted individuals were Western actresses, followed by South Korean K-pop singers.

    The same app removed by Google and Apple had run ads on Meta’s platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement the company’s policy restricts both AI-generated and non-AI adult content and it has restricted the app’s page from advertising on its platforms.

    Take It Down tool

    In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves from the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child safety groups.

    “When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

    “We have not … been able to formulate a direct response yet to it,” Portnoy said.

    Source link

  • Opinion: Washington needs to get over its TikTok fixation | CNN

    Opinion: Washington needs to get over its TikTok fixation | CNN

    Editor’s Note: Evan Greer is an activist, writer and musician based in Boston. She’s the director of the digital rights group Fight for the Future, and a regular commentator on issues related to technology policy, LGBTQ communities and human rights. Follow her on Twitter @evan_greer or Mastodon @evangreer@mastodon.online. Read more opinion on CNN.



    CNN — The US government is racing ahead with proposals aimed at banning TikTok, the viral video platform used by more than 150 million Americans. Officials say it’s a matter of national security, gesturing urgently toward TikTok’s parent company ByteDance and its ties to China.

    While some might be motivated by thinly-veiled xenophobia, lawmakers also rightly point to concerns about TikTok’s surveillance-capitalist business model, which vacuums up as much personal information about users as possible and then uses it to serve content that keeps us clicking, scrolling, and generating ad revenue. TikTok “spies” on us for profit. That’s not in question.

    The problem is that – while they might not be owned by a Chinese company – Instagram, YouTube, Facebook, Snapchat and Twitter all do it too, as privacy advocates have been warning for more than a decade. Banning TikTok won’t make us safer from China’s surveillance operations. Nor will it protect children, or anyone else, from getting addicted to Big Tech’s manipulative products. It’s just an ineffective solution that sounds good on TV.

    While many governments engage in internet censorship and surveillance, China certainly has one of the most sophisticated and draconian systems. A core characteristic of China’s censorship regime is the “Great Firewall,” which blocks foreign social media apps, news sites and even educational resources like Wikipedia, under the guise of protecting national security.

    As they hyperventilate about TikTok, US politicians are so eager to appear “tough on China” that they’re suggesting we build our very own Great Firewall here at home. There is a small but growing number of countries in the world so authoritarian that they block popular apps and websites entirely. It’s regrettable that so many US lawmakers want to add us to that list.

    Several of the proposals wending their way through Congress would grant the federal government unprecedented new powers to control what technology we can use and how we can express ourselves – authority that goes far beyond TikTok. The bipartisan RESTRICT Act (S. 686), for example, would enable the Commerce Department to engage in extraordinary acts of policing, criminalizing a wide range of activities with companies from “hostile” countries and potentially even banning entire apps simply by declaring them a threat to national security.

    The law is vague enough that some experts have raised concerns that it could threaten individual internet users with lengthy prison sentences for taking steps to “evade” a ban, like side-loading an app (i.e., bypassing approved app distribution channels such as the Apple store) or using a virtual private network (VPN).

    But banning TikTok isn’t just foolish and dangerous, it’s also unconstitutional. The strong free speech protections enshrined in the First Amendment bar the government from extreme actions like criminalizing an app that millions of people use to express their opinions and ideas. The US government can’t ban you from posting or watching TikTok videos any more than they can stop you from reading a foreign newspaper like the Times of India or writing an opinion piece for The Guardian.

    The Washington Post, the New York Times and CNN all have their own official TikTok accounts, as do numerous candidates for office, elected officials, academics, journalists, religious leaders and political figures. Any proposal that results in TikTok’s effective ban in the US would almost certainly fall apart under a legal challenge, as the American Civil Liberties Union and other experts have asserted. Even conservative Republican Senator Rand Paul of Kentucky agrees that banning the app would violate Americans’ right to free speech.

    A ban on TikTok wouldn’t even be effective: The Chinese government could purchase much of the same information from data brokers, which are largely unregulated in the US.

    The rush to ban TikTok – or force its sale to a US company – is a convenient distraction from what our elected officials should be doing to protect us from government manipulation and commercial surveillance: passing some basic data privacy legislation. It’s a matter of common knowledge that Instagram, YouTube, Venmo, Snapchat and most of the other apps on your phone engage in similar data harvesting business practices to TikTok. Some are even worse.

    So it’s not just TikTok. Much of what you do in the digital space on all of your devices is tracked. Companies that engage in the practice claim that they track users’ activities online in order to deliver more targeted advertising and content.

    Many companies sell the data they harvest to third parties, who sell it to fourth and fifth and sixth parties. While companies collect this data for the purpose of extracting profit and getting users hooked on their products, governments have long taken an interest.

    The only way to stop governments from weaponizing data that private companies like TikTok collect and store about us is to stop those companies from collecting and storing so much information in the first place. You can’t do that with censorship. You do that by passing a strong national data privacy law that bans companies from collecting more data about us than they need to provide us with the service we’ve requested.

    Instead of helping Big Tech get bigger by banning a major competitor, Congress should pass antitrust legislation to crack down on anti-competitive practices. That would give concerned parents and internet users who want to ditch TikTok and Instagram better options to choose from, and reduce the power of the largest platforms, making them harder for governments to exploit and manipulate. It’s much harder for bad actors, whether they’re corporate trolls or government agents, to control information across a constellation of smaller platforms, each with its own rules and algorithms, than it is for them to poison the well when a tiny handful of companies controls access to information.

    A separate concern that lawmakers and US officials have raised is the idea that the Chinese government could pressure TikTok to amplify propaganda, or otherwise change its algorithm to advance the government’s interests. It’s an argument that’s not entirely without merit.

    We know the Russian government was effective in manipulating information on Facebook during the 2016 elections. The US has historically engaged in similar conduct overseas. Consider, for example, the US history of influencing the outcomes of elections in Latin America or disinformation campaigns by US allies after the Arab Spring. State-backed disinformation campaigns are happening at a mass scale and on every major platform. We fight that by demanding more transparency and accountability, not more censorship.

    It’s a national embarrassment that we have no basic data privacy law in the United States. And it’s a travesty that we continue to allow unregulated tech monopolies to trample our rights. Every day that our elected officials spend wringing their hands and spreading moral panic about what the kids are doing on TikTok is another day we’re left vulnerable and unprotected.

    With any luck, Washington’s TikTok hysteria will fade quickly. Let’s hope the next hot new trend in the nation’s capital is passing actual laws that protect people, starting with strong privacy and antitrust legislation.

    Source link

  • Montana close to becoming 1st state to completely ban TikTok

    Montana close to becoming 1st state to completely ban TikTok

    HELENA, Mont. (AP) — Montana lawmakers moved one step closer Thursday to passing a bill to ban TikTok from operating in the state, a move that’s bound to face legal challenges but also serve as a testing ground for the TikTok-free America that many national lawmakers have envisioned.

    Montana’s proposal, which has backing from the state’s GOP-controlled legislature, is more sweeping than bans in place in nearly half the states and the U.S. federal government that prohibit TikTok on government devices.

    The House endorsed the bill 60-39 on Thursday. A final House vote will likely take place Friday before the bill goes to Republican Gov. Greg Gianforte. He has banned TikTok on government devices in Montana. The Senate passed the bill 30-20 in March.

    TikTok, which is owned by the Chinese tech company ByteDance, has been under intense scrutiny over concerns it could hand over user data to the Chinese government or push pro-Beijing propaganda and misinformation on the platform. Leaders at the FBI, CIA and numerous lawmakers of both parties have raised those concerns but haven’t presented any evidence to prove it has happened.

    Supporters of a ban point to two Chinese laws that compel companies in the country to cooperate with the government on state intelligence work. They also point out other troubling episodes, such as a disclosure by ByteDance in December that it fired four employees who accessed the IP addresses and other data of two journalists while attempting to uncover the source of a leaked report about the company.

    Congress is considering legislation that doesn’t call out TikTok, but gives the Commerce Department the ability to restrict foreign threats on tech platforms. That bill is being backed by the White House, but it has received pushback from privacy advocates, right-wing commentators and others who say the language is too broad.

    Montana Attorney General Austin Knudsen urged state lawmakers to pass the bill because he wasn’t sure Congress would act quickly on a federal ban.

    “I think Montana’s got an opportunity here to be a leader,” Knudsen, a Republican, told a House committee in March. He says the app is a tool used by the Chinese government to spy on Montanans.

    Montana’s ban would not take effect until January 2024 and would be void if Congress passes a ban or if TikTok severs its Chinese connections.

    The bill would prohibit downloads of TikTok in Montana and would fine any “entity” — an app store or TikTok — $10,000 per day for each time someone “is offered the ability” to access the social media platform or download the app. The penalties would not apply to users.

    Opponents argued the bill amounted to government overreach and that residents could easily circumvent the proposed ban by using a Virtual Private Network. A VPN encrypts internet traffic and makes it more difficult for third parties to track online activities, steal data and determine a person’s location.

    At a hearing for the bill in March, a representative from the tech trade group TechNet said app stores also “do not have the ability to geofence” apps on a state-by-state basis and that it would be impossible for its members, like Apple and Google, to prevent TikTok from being downloaded in Montana.

    Knudsen said Thursday the geofencing technology is used with online sports gambling apps, which he said are deactivated in states where online gambling is illegal. Ashley Sutton, TechNet’s executive director for Washington state and the northwest, said in a statement Thursday that the “responsibility should be on an app to determine where it can operate, not an app store.”

    “We’ve expressed these concerns to lawmakers. We hope the governor will work with lawmakers to amend the legislation to ensure companies that aren’t intended targets of the legislation” aren’t affected, Sutton said.

    TikTok said in a statement it will “continue to fight for TikTok users and creators in Montana whose livelihoods and First Amendment rights are threatened by this egregious government overreach.”

    Some opponents of the bill have argued the state wasn’t looking to ban other social media apps that collect similar types of data from their users.

    “We also believe this is a blatant exercise of censorship and is an egregious violation of Montanans’ free speech rights,” said Keegan Medrano with the ACLU of Montana.

    Democratic Rep. Katie Sullivan offered an amendment Thursday to broaden the ban to include any social media app that collected personal information and transferred it to a foreign adversary, such as Russia, Iran, Cuba, North Korea and Venezuela, along with China. The amendment was narrowly rejected 48-51.

    Supporters of the bill said it made sense to target TikTok first because of specific concerns with China and that it was a step in the right direction even if it doesn’t address challenges related to other social media companies.

    TikTok has been pushing back against the bill. The company, which has 150 million users in the U.S., has encouraged users in the state to speak out against the bill and hired lobbyists to do so as well. It has also purchased billboards, run full-page newspaper ads and has a website opposing Montana’s legislation. Some ads placed in local newspapers highlight how local businesses were able to use the app to drive sales.

    The bill would “show Montana doesn’t support entrepreneurs in our own state,” Shauna White Bear, who owns White Bear Moccasins, said during a March 28 hearing. She noted her business receives much more engagement on TikTok than on other social media sites.

    Knudsen, the attorney general whose office drafted the bill, said he expects the bill to face legal challenges if it passes.

    “Frankly, I think it probably needs the courts to step in here,” he said. “This is a really interesting, novel legal question that I think is ripe for some new jurisprudence.”

    The Montana bill isn’t the first blanket ban the company has faced. In 2020, then-President Donald Trump issued executive orders that banned the use of TikTok and the Chinese messaging platform WeChat. Those efforts were nixed by the courts and shelved by the Biden administration.

    TikTok continued negotiations with the administration on the security concerns tied to the app. Amid rising geopolitical tensions with China, the Biden administration more recently has threatened it could ban the app if the company’s Chinese owners don’t sell their stakes. To avoid either outcome, TikTok has been trying to sell a data safety proposal called “Project Texas” that would route all its U.S. user data to servers operated by the software giant Oracle.

    ___

    Hadero reported from New York.

    Source link

  • How TikTok Micro-Influencers Can Benefit Your Business | Entrepreneur

    How TikTok Micro-Influencers Can Benefit Your Business | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    TikTok is commonly thought of as nothing more than Gen Z’s favorite platform for madcap entertainment, and it is certainly a hypnotizing way to follow the latest trends. But the truth is that as its popularity has expanded, so have its uses, and dramatically so.

    With more than a billion active users globally, the social media giant has been a veritable windfall for businesses, amplifying the reach of titans like the NBA, Netflix and Chipotle.

    John Boitnott

    Source link

  • Montana becomes first state to pass bill banning TikTok

    Montana becomes first state to pass bill banning TikTok

    Montana has become the first state in the nation Friday to pass a bill banning TikTok from operating in the state. The bill now goes to the governor’s desk for his signature. It could face several legal hurdles.



    Source link

  • Images of leaked classified documents were posted to at least two Discord chatrooms | CNN Politics

    Images of leaked classified documents were posted to at least two Discord chatrooms | CNN Politics



    CNN — Images of the leaked classified documents were posted to at least two chatrooms on Discord, a social media platform popular with video gamers, according to a CNN review of Discord posts and interviews with its users.

    The leaks began months ago on the first chatroom, called Thug Shaker Central, that Jack Teixeira allegedly oversaw, multiple US officials told CNN. An FBI affidavit unsealed Friday corroborates this timeline.

    Teixeira, a 21-year-old airman with the Massachusetts Air National Guard, made his first appearance in federal court in Boston Friday morning following his arrest by the FBI in North Dighton, Massachusetts, the day before.

    According to charging documents, Teixeira held a top secret security clearance and allegedly began posting information about the documents online around December 2022, and photos of documents in January.

    It is unclear how, exactly, photos of the classified documents later ended up on a second Discord chatroom, known as End of Wow Mao Zone, in March. But four members of Wow Mao Zone told CNN that they saw another user, who does not appear to be Teixeira and who went by “Lucca,” repost some of the classified documents to that chatroom.

    CNN has been unable to contact Lucca or establish their identity. In many online forums, users cloak their identities behind screen names and are reluctant to reveal themselves, including the End of Wow Mao Zone members that CNN spoke with. But End of Wow Mao Zone chatroom members told CNN that Lucca played a key role in propagating the documents that Teixeira allegedly leaked.

    On Discord, Lucca had stature and anonymity — two things that allowed the documents to remain on the platform for weeks without repercussions. And multiple users assumed the documents were fake, reasoning that no one would be brazen enough to post US military secrets to the platform.

    Lucca was a “respected user,” one Discord user who said they knew Lucca told CNN in a text conversation, and it was expected that Lucca would take the images down. But they didn’t. Many of the chat rooms are very lightly moderated, and the images stayed up for weeks, according to the four users who spoke to CNN.

    After posting the documents, Lucca would add “fresh off the press” or something along those lines, one user added. “He would post them for attention. It was very common for him to ping everyone,” the user said.

    Discord is aware of Teixeira’s arrest and has cooperated with US law enforcement on the investigation, a Discord spokesperson told CNN in a statement Thursday night.

    “Our Terms of Service expressly prohibit using Discord for illegal or criminal purposes, which includes the sharing of documents on Discord that may be verifiably classified,” the Discord spokesperson said.

    Source link

  • Montana becomes first state to pass bill completely banning TikTok

    Montana becomes first state to pass bill completely banning TikTok

    Montana became the first state in the nation Friday to pass a bill banning TikTok from operating in the state, a move that’s bound to face legal challenges but also serve as a testing ground for the TikTok-free America that many national lawmakers have envisioned.

    The Montana House voted 54-43 to send the bill to Republican Gov. Greg Gianforte for his signature. 

    “The governor will carefully consider any bill the legislature sends to his desk,” the governor’s office told CBS News in a statement. “We will keep you apprised of the bill’s status once the governor acts on it.” 

    Gianforte has already banned TikTok on government devices in Montana. The Senate passed the bill 30-20 in March.

    The proposal backed by Montana’s GOP-controlled legislature is more sweeping than bans in place in nearly half the states and the federal government, which prohibit TikTok on government devices.

    In response to the bill’s passage, a TikTok spokesperson said in a statement to CBS News on Friday afternoon, “The bill’s champions have admitted that they have no feasible plan for operationalizing this attempt to censor American voices and that the bill’s constitutionality will be decided by the courts. We will continue to fight for TikTok users and creators in Montana whose livelihoods and First Amendment rights are threatened by this egregious government overreach.”

    TikTok, which is owned by the Chinese tech company ByteDance, has been under intense scrutiny over concerns it could hand over user data to the Chinese government or push pro-Beijing propaganda and misinformation on the platform. Leaders at the FBI, CIA and numerous lawmakers of both parties have raised those concerns but haven’t presented any evidence to prove it has happened.

    Supporters of a ban point to two Chinese laws that compel companies in the country to cooperate with the government on state intelligence work. They also point out other troubling episodes, such as a disclosure by ByteDance in December that it fired four employees who accessed the IP addresses and other data of two journalists while attempting to uncover the source of a leaked report about the company.

    Congress is considering legislation that doesn’t call out TikTok but gives the Commerce Department the ability to restrict foreign threats on tech platforms. That bill is being backed by the White House but has received pushback from privacy advocates, right-wing commentators and others who say the language is too broad.

    Montana Attorney General Austin Knudsen had urged state lawmakers to pass the bill because he wasn’t sure Congress would act quickly on a federal ban.

    “I think Montana’s got an opportunity here to be a leader,” Knudsen, a Republican, told a House committee in March. He says the app is a tool used by the Chinese government to spy on Montanans.

    Montana’s ban wouldn’t take effect until January 2024 and would be void if Congress passes a ban or if TikTok severs its Chinese connections.

    The bill would prohibit downloads of TikTok in Montana and would fine any “entity” — an app store or TikTok — $10,000 per day for each time someone “is offered the ability” to access the social media platform or download the app. The penalties wouldn’t apply to users.

    Opponents argued the bill amounted to government overreach and that residents could easily circumvent the proposed ban by using a Virtual Private Network. A VPN encrypts internet traffic and makes it more difficult for third parties to track online activities, steal data and determine a person’s location.

    At a hearing about the bill in March, a representative from the tech trade group TechNet said app stores also “do not have the ability to geofence” apps on a state-by-state basis and that it would be impossible for its members, like Apple and Google, to prevent TikTok from being downloaded in Montana.

    Knudsen said Thursday the geofencing technology is used with online sports gambling apps, which he said are deactivated in states where online gambling is illegal. Ashley Sutton, TechNet’s executive director for Washington state and the Northwest, said in a statement Thursday that the “responsibility should be on an app to determine where it can operate, not an app store.”

    “We’ve expressed these concerns to lawmakers. We hope the governor will work with lawmakers to amend the legislation to ensure companies that aren’t intended targets of the legislation” aren’t affected, Sutton said.

    Some opponents of the bill have argued the state wasn’t looking to ban other social media apps that collect similar types of data from their users.

    “We also believe this is a blatant exercise of censorship and is an egregious violation of Montanans’ free speech rights,” said Keegan Medrano with the ACLU of Montana.

    Democratic Rep. Katie Sullivan offered an amendment Thursday to broaden the ban to include any social media app that collected personal information and transferred it to a foreign adversary, such as Russia, Iran, Cuba, North Korea and Venezuela, along with China. The amendment was narrowly rejected 48-51.

    Supporters of the bill said it made sense to target TikTok first because of specific concerns with China and that it was a step in the right direction even if it doesn’t address challenges related to other social media companies.

    TikTok has been pushing back against the bill. The company, which has 150 million users in the U.S., has encouraged users in the state to speak out against the legislation and hired lobbyists to do so as well. It has also purchased billboards, run full-page newspaper ads and has a website opposing Montana’s legislation. Some ads placed in local newspapers highlight how local businesses were able to use the app to drive sales.

    The bill would “show Montana doesn’t support entrepreneurs in our own state,” Shauna White Bear, who owns White Bear Moccasins, said during a March 28 hearing. She noted her business receives much more engagement on TikTok than on other social media sites.

    Knudsen, the attorney general whose office drafted the bill, said he expects the bill to face legal challenges if it passes.

    “Frankly, I think it probably needs the courts to step in here,” he said. “This is a really interesting, novel legal question that I think is ripe for some new jurisprudence.”

    The Montana bill isn’t the first blanket ban the company has faced. In 2020, then-President Donald Trump issued executive orders that banned the use of TikTok and the Chinese messaging platform WeChat. Those efforts were nixed by the courts and shelved by the Biden administration.

    TikTok continued negotiations with the administration on the security concerns tied to the app. Amid rising geopolitical tensions with China, the Biden administration more recently has threatened it could ban the app if the company’s Chinese owners don’t sell their stakes. To avoid either outcome, TikTok has been trying to sell a data safety proposal called “Project Texas” that would route all its U.S. user data to servers operated by the software giant Oracle.

    Source link