ReportWire

Category: Technology

Technology News | ReportWire publishes the latest breaking U.S. and world news, trending topics and developing stories from around the globe.

  • What is the Google Home System? Ways for it to transform your life.

    The Google Home system started as a simple wireless speaker that could take voice commands. However, it has become a robust system for automating your home. Controlled by the Google Home app, it allows you to ask questions, launch apps and create routines that control your home’s devices. The Google Home app is available for iOS and Android devices.

    Google Home app allows you to ask questions and launch apps.
    (Cyberguy.com)

    What do you need to get started with Google Home? 

    You’ll need a Google Home speaker device like the Google Nest Mini, the Google Home app and a Google/Gmail account to use Google Home. The Google Home app will walk you through the setup, and you’ll be able to add other information, like your location, so you can get local weather or traffic updates. You’ll also want to connect your Google Home app with other apps like Spotify or Google Photos to increase the device’s functionality.

    What kinds of things can I do with Google Home? 

    In short, Google Home is your virtual butler, creating a world of possibilities for users like you and me. You can say basic voice commands to start a favorite playlist, or, if you have a question about absolutely anything, ask Google Assistant rather than look it up on your phone. You can also create a routine that gives you the weather and traffic report at a specific time each morning.

    Home security is another popular use of Google Home. When an exterior light or motion sensor is triggered, Google Home can turn on a smart bulb inside the house, creating the impression that someone has noticed a sound outside. You can also create routines that turn on interior lights on a schedule when you’re away.

    Google Home app allows you to control your home’s devices. 
    (Cyberguy.com)

    What types of devices work with Google Home? 

    There are hundreds of Google Home-enabled devices, with more coming on the market all the time: 

    • Smart plugs can allow users to control non-smart devices by providing or removing power. You can manage all of this through the Google Home app.
    • Google Home can control robot vacuums.
    • With smart Google Home-enabled doorbells or cameras, you can easily see who is at the door from anywhere in the house, your city or the world — essentially, wherever you have a connection.
    • Smart thermostats allow you to manually control your home’s heating and cooling cycles or to automate them entirely using geofencing, so that the heating dials back when the house is empty.
    • Window and door locks can be locked or unlocked remotely, and cameras can record exterior and interior movement.

    Getting alerts when a device joins your Google Home Group 

    You should always be alerted when another device joins your Google Home Group, especially if you’re the only person in your household. Your Google Home Group consists of all the Google and Chromecast devices set up in your home, and you’ll always want to be aware and in control of them and not get any surprises. This way, you’ll always know if someone is trying to hack into your Google account or add another device without your consent. Here’s how to get alerts for your Google Home Group: 

    • Open your Google Home app.
    • Go to Settings > General > Notifications.
    • Toggle on People and devices.
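    Google doesn’t expose a public API for these notifications, but the underlying idea is simple to illustrate: compare the current device list against the set you already know about and flag anything new. The following sketch is purely hypothetical; the function and device names are invented for illustration, not part of any Google SDK.

```python
def detect_new_devices(known_devices, current_devices):
    """Return devices present now that weren't in the known set.

    Both arguments are iterables of device names. Illustrative only --
    the real Google Home app performs this check internally.
    """
    return sorted(set(current_devices) - set(known_devices))


known = {"Nest Mini (Living Room)", "Chromecast (Bedroom)"}
seen_now = {"Nest Mini (Living Room)", "Chromecast (Bedroom)", "Unknown Speaker"}

for device in detect_new_devices(known, seen_now):
    print(f"Alert: new device joined your home group: {device}")
```

    Run periodically, a check like this surfaces exactly the surprise the article warns about: a device you never consented to adding.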

    Google Home app is available for iOS and Android devices. 
    (Cyberguy.com)

    Privacy settings 

    Your privacy settings are one of the most essential features of your Google Home device. They control which devices are connected, what private data is stored and even what web activity is logged. You should double-check which actions you have specifically authorized, and switch off anything you don’t remember consenting to. Here’s how to update your privacy settings:

    • Open the Google Home app.
    • Tap on your personal icon in the top right-hand corner.
    • Select You from the menu bar.
    • Tap Your data in the Assistant to see what information you have listed.

    Deleting some or all of your private data 

    Google Assistant saves audio recordings of every voice command Google Home has ever heard, which helps the software understand your voice and execute future commands better. However, this history isn’t critical to the device’s operation. Here’s how to delete that and all other data:

    • Go to the Your data in the Assistant page.
    • Under Your Assistant activity, tap My Activity.
    • To the right of the search bar at the top of the page, tap the icon of three stacked dots.
    • Tap Delete activity by.
    • If you want to start over with a clean slate, tap All time. Otherwise, you can choose to delete all data collected in the last hour or last day or create a custom range, say, from the day you started using Google Home until last month.
    • The app will ask you to confirm that you want to delete your Google Assistant activity for the specified period. Tap Delete to confirm.
    • You’ll see the message “Deletion complete” in the lower-right corner. Tap Got it to return to the main Google Assistant Activity page.
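    The “Delete activity by” options all reduce to the same operation: removing every record whose timestamp falls inside a chosen window, where “All time” is an unbounded window. A minimal sketch of that filtering logic, with an invented activity log for illustration (Google performs this server-side, not in the app):

```python
from datetime import datetime, timedelta


def delete_activity(entries, start=None, end=None):
    """Return the entries that survive deleting the window [start, end].

    `entries` maps timestamps to activity records. Passing no bounds
    mimics the "All time" option, which deletes everything.
    """
    kept = {}
    for ts, record in entries.items():
        in_window = (start is None or ts >= start) and (end is None or ts <= end)
        if not in_window:
            kept[ts] = record
    return kept


now = datetime(2022, 12, 30, 12, 0)
log = {
    now - timedelta(days=40): "played jazz playlist",
    now - timedelta(hours=2): "asked for the weather",
}

# Delete only the last day, as with the "last day" option:
survivors = delete_activity(log, start=now - timedelta(days=1), end=now)
print(list(survivors.values()))  # the 40-day-old entry survives
```

    A custom range, say from the day you started using Google Home until last month, is just a different pair of bounds.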

    The most extreme privacy option: pausing all activity

    You can also set Google Assistant to stop keeping logs of your data; however, that may cause some hiccups with how well Google Assistant functions. If your privacy is of the utmost importance to you, and you’re willing to deal with anything from occasional glitches to an entirely non-functional Google Assistant, start from the main Google Assistant Activity page:

    • Scroll down to where Web & App Activity is on, and tap Change setting.
    • Turn off the toggle beside Web & App Activity.

    A screen will pop up, warning you that “pausing Web & App Activity may limit or disable more personalized experiences across Google services.” At the bottom of that screen, press Pause to stop Google from logging your activity. Note that changing this setting does not delete your personal data from Google. It only prevents Google Assistant from recording more data going forward.

    After you press Pause, you’ll be returned to the main Google Assistant Activity page. 

    The fun stuff – making calls 

    One of the coolest features of Google Home is that you can make calls without having to do any of the work. For this feature to work, however, you must ensure it is set up correctly. Here’s how to make sure Google Home always displays your primary phone number when you ask it to place a call:

    • Open the Google Home app.
    • Go to Settings.
    • Under Google Assistant Services, tap Voice & Video Calls.
    • Select Mobile Calling.
    • If it’s not set up yet, select Your own number, and then add or change your phone number. Google will then send a verification code for you to enter on the next screen.
    • Once your number is added, make sure Your own phone number is selected underneath Your linked services.
    • Go to Contacts, and select Upload now to sync contacts from your phone.

    Nest Learning Thermostat displaying Google logo in smart home in Lafayette, California, January 17, 2021. 
    (Photo by Smith Collection/Gado/Getty Images.)

    Changing your nickname 

    A feature you can have the most fun with is having your Google Home device call you a nickname, which can be any name you want. Even if it’s something as silly as ‘Big Foot’ or ‘Mr. President,’ there’s a way for you to have your device call you anything you wish (and yes, cuss words are included). 

    • Open the Google Home app.
    • Go to Settings.
    • Scroll down, and select More settings.
    • Under You, tap Nickname.
    • Under What should the Assistant call you?, type in the nickname you wish to use.
    • Tap Play to hear how Google Assistant says your name. If it says the name incorrectly, try spelling it phonetically instead.

    Creating a speaker group 

    There’s nothing better than jamming to your favorite music, and you can enhance your listening experience by doubling or even tripling your sound. By grouping multiple speakers, you can build a whole-house audio system and turn any room into a real party.

    • Open the Google Home app, and tap the + sign in the upper left corner.
    • Tap Create Speaker Group, and select all the speakers you wish to add.
    • Tap Next.
    • Name the speaker group, and tap Save.


  • How we covered the creator economy in 2022


    This summer, I went straight from VidCon — the largest creator conference — to a labor journalism seminar with the Sidney Hillman Foundation. One day, I was chatting with famous TikTokers about their financial anxieties (what if they accidentally get banned from TikTok tomorrow?), and the next, I was learning about the history of American labor organizing.

    These topics are not at all unrelated: at its core, writing about the creator economy is labor journalism. The creator beat is a labor beat.

    Creators are rebelling against the traditional route to making a living in artistic industries, taking control over their income to make money for themselves, rather than big media conglomerates. Consider creators like Brian David Gilbert, who built a devoted fanbase as a chaotically hilarious video producer for Polygon, the video game publication at Vox Media. Gilbert quit to work on other creative projects full time, likely because he realized that with his audience, he could make way more money independently than his media salary paid him. Then there are YouTube channels like Defunctland and Swell Entertainment, which are basically investigative journalism outlets run by individual video producers. We see chefs building their brands by going viral on TikTok, or teachers who supplement their income by sharing educational content on Instagram. In artistic industries that notoriously underpay for the expertise that their laborers provide, YouTubers, Instagrammers and newsletter writers alike are proving that creativity is a monetizable skill — one that they deserve to make more than a living wage with.

    This belief — that the creator economy is a labor beat — has guided my coverage of the industry this year. Below, I’ve rounded up some of our best stories about the state of the creator economy.

     

    Like most teens, Chris McCarty spent a lot of time on YouTube, but they had a serious question. How can the children of influencers protect themselves when they’re too young to understand what it means to be a constant fixture in online videos? As part of their Girl Scouts Gold Award project, McCarty worked with Washington State Representative Emily Wicks to introduce a bill that seeks to protect and compensate children for their appearance in family vlogs.

    As early as 2010, amateur YouTubers realized that “cute kid does stuff” is a genre prone to virality. David DeVore, then 7, became an internet sensation when his father posted a YouTube video of his reaction to anesthesia called “David After Dentist.” David’s father turned the public’s interest in his son into a small business, earning around $150,000 within five months through ad revenue, merch sales and a licensing deal with Vizio. He told The Wall Street Journal at the time that he would save the money for his children’s college costs, as well as charitable donations. Meanwhile, the family behind the “Charlie bit my finger” video made enough money to buy a new house.

    Over a decade later, some of YouTube’s biggest stars are children who are too young to understand the life-changing responsibility of being an internet celebrity with millions of subscribers. Seven-year-old Nastya, whose parents run her YouTube channel, was the sixth-highest-earning YouTube creator in 2022, earning $28 million. Ryan Kaji, a 10-year-old who has been playing with toys on YouTube since he was 4, earned $27 million from a variety of licensing and brand deals.

     

    I’m fascinated by MrBeast, but kind of in a “watching a car crash” way. MrBeast is still cruising comfortably along the highway, but I worry about the guy (… not too much. I mean. He’s doing fine). His business model just doesn’t seem sustainable to me, despite his immense riches and irreplaceable success. As he attempts to raise a unicorn-sized VC round, we’ll see if he can keep escalating his stunts without becoming yet another David Dobrik.

    Is going bigger always better? MrBeast’s business model is like a snake eating its own tail — no one is making money like he is, but no one is spending it like him either. He described his margins as “razor-thin” in a conversation with Logan Paul, since he reinvests most of his profits back into his content. His viewers expect that each video will be more impressive than the last, and from the outside looking in, it seems like it’s only a matter of time before MrBeast can no longer up the ante (and for other creators, this has led to disaster). So, if MrBeast’s business really is a unicorn — I’d wager it is — then he has two choices. Will he use the cushion of $150 million to make his business more sustainable, so he doesn’t have to keep burying himself alive? Or will he keep pushing for more until nothing is left?

     

    Speaking of David Dobrik, longtime YouTuber Casey Neistat debuted a documentary at SXSW this year about the 26-year-old YouTuber. When Neistat started working on the documentary, he wanted to capture the phenomenon that was Dobrik and his Vlog Squad, who used to be YouTube royalty. The documentary took a turn after Insider surfaced allegations of sexual assault on Dobrik’s film set — then, Dobrik nearly killed his friend Jeff Wittek in a stunt gone horribly wrong. Neistat does a brilliant job capturing the creator’s fall from grace, plus the way in which the lack of regulations on YouTube film sets can set the stage for disaster, especially when creators are incentivized to do crazier and crazier stunts to stay relevant.

    Television series like “Hype House” and “The D’Amelio Show” dedicate entire plotlines to creators’ fear of being “cancelled,” but Dobrik is still doing okay, calling into question just how far a creator has to go to lose his fans. Dobrik just opened a pizza shop in LA and has his own Discovery TV show. Wittek has had at least nine surgeries to date as a result of his accident on Dobrik’s set.

    “I think that there’s always a pursuit. It’s relevant for a musician – how do you keep your music interesting?” Neistat said. “But what makes individuals like David Dobrik different is that their pursuit is not coming out with the next song or making the next movie. Their pursuit is, how can I be more sensationalist? And that is a very, very, very dangerous pursuit, because the minute you achieve something that was crazier than the last, you then have to go past that.”

     

    The biggest open secret in short-form video is that you can’t get rich on TikTok alone, because even the most viral creators earn a negligible portion of their income from the platform itself. TikTok has long been dominant in the short-form scene, but YouTube Shorts could give TikTok a run for its money next year as it becomes the first platform to share ad revenue with short-form creators. Ad revenue doesn’t seem that glamorous, but I couldn’t be more excited to see how this program will change the short-form game in 2023.

    A big reason why TikTok and other short-form video apps haven’t unveiled a similar revenue-sharing program yet is because it’s trickier to figure out how to fairly split ad revenue on an algorithmically-generated feed of short videos. You can’t embed an ad in the middle of a video — imagine watching a 30-second video with an eight-second ad in the middle — but if you place ads between two videos, who would get the revenue share? The creator whose video appeared directly before or after it? Or, would a creator whose video you watched earlier in the feed deserve a cut too, because their content encouraged you to keep scrolling?
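    One naive way to formalize the attribution question the paragraph raises: split an ad’s revenue among the creators of the videos watched before it, weighted so that more recent videos earn a larger share. The decay weighting and the function below are invented for illustration; no platform has announced this as its actual policy.

```python
def split_ad_revenue(revenue, watched_creators, decay=0.5):
    """Split one ad's revenue among creators watched before it.

    `watched_creators` is ordered most recent first. The most recent
    video gets the largest share; each earlier one gets `decay` times
    the weight of the next. Purely hypothetical accounting.
    """
    weights = [decay ** i for i in range(len(watched_creators))]
    total = sum(weights)
    payouts = {}
    for creator, weight in zip(watched_creators, weights):
        payouts[creator] = payouts.get(creator, 0.0) + revenue * weight / total
    return payouts


# A $1.00 ad shown after videos by C, B, and A (C most recent):
print(split_ad_revenue(1.00, ["C", "B", "A"]))
```

    Even this toy model exposes the policy questions in the text: how fast should the decay be, how far back in the feed does credit extend, and should the creator right after the ad count at all?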

     

    At TechCrunch Disrupt, I interviewed OnlyFans CEO Ami Gan and Chief Strategy Officer Keily Blair about the platform’s future, especially in regard to sex workers. In large part due to the success of adult creators, OnlyFans has paid out over $8 billion to creators since 2016. For comparison, the mostly safe-for-work competitor Patreon has paid out $3.5 billion since 2013. Online sex workers are some of the savviest, highest-earning creators in the business, yet they are the most vulnerable. Changing credit card company regulations and internet privacy laws can wipe out their business, and last year, that almost happened on OnlyFans. The company said it would ban adult content, then walked back that ban — but even still, adult creators have been skeptical about how long they can keep making a living on the platform. On our stage, I asked Gan if adult content will still be on OnlyFans in 5 years. She said yes.

    OnlyFans has been putting a lot of effort into upcycling its image from an adult content subscription platform to a Patreon-like home for all kinds of creators, but it’s far from moving away from adult creators as users. Today the platform’s CEO Ami Gan confirmed that adult content will still have a home on the site in five years, and those creators can continue to make a living on it.

    The confirmation, made today on stage at TechCrunch Disrupt, is notable because of the rocky relationship OnlyFans has had with adult creators. Last year, the company announced it would ban adult content on the site after pressure from card payment companies and efforts it reportedly was making to raise outside funding. Then it abruptly suspended the decision less than a week later after an outcry from users.

    Amanda Silberling


  • The year customer experience died


    This was a rough year for customer experience.

    We’ve been hearing for years how important customer experience is to business, and a whole business technology category has been built around it, with companies like Salesforce and Adobe at the forefront. But due to the economy or lack of employees (perhaps both?), 2022 was a year of poor customer service, which in turn has created poor experiences; there’s no separating the two.

    No matter how great your product or service, you will ultimately be judged by how well you do when things go wrong, and your customer service team is your direct link to buyers. If you fail them in a time of need, you can lose them for good and quickly develop a bad reputation. News can spread rapidly through social media channels. That’s not the kind of talk you want about your brand.

    And make no mistake: Your customer service is inexorably linked to the perceived experience of your customer. We’re constantly being asked for feedback about how the business did, yet this thirst for information doesn’t seem to ever connect back to improving the experience.

    Consider the poor folks who bought tickets for Southwest Airlines flights this week. The airline admittedly screwed up, yet one video showed a representative calling the police on its own passengers simply for being at the gate. When it comes to abusing your customers and destroying your brand goodwill, that example takes the cake.

    For too long we’ve been hearing about how data will drive better experiences, but is that data ever available to the people dealing with the customers? They don’t need data — they need help and training and guidance, and there clearly wasn’t enough of that in 2022. It seemed companies cut back on customer service to the detriment of their customers’ experience and ultimately to the reputation of the brand.

    Ron Miller


  • How China is building a parallel generative AI universe


    The gigantic technological leap that machine learning models have shown in the last few months is getting everyone excited about the future of AI — but also nervous about its uncomfortable consequences. After text-to-image tools from Stability AI and OpenAI became the talk of the town, ChatGPT’s ability to hold intelligent conversations is the new obsession in sectors across the board.

    In China, where the tech community has always watched progress in the West closely, entrepreneurs, researchers, and investors are looking for ways to make their dent in the generative AI space. Tech firms are devising tools built on open source models to attract consumer and enterprise customers. Individuals are cashing in on AI-generated content. Regulators have responded quickly to define how text, image, and video synthesis should be used. Meanwhile, U.S. tech sanctions are raising concerns about China’s ability to keep up with AI advancement.

    As generative AI takes the world by storm towards the end of 2022, let’s take a look at how this explosive technology is shaking out in China.

    Chinese flavors

    Thanks to viral art creation platforms like Stable Diffusion and DALL-E 2, generative AI is suddenly on everyone’s lips. Halfway across the world, Chinese tech giants have also captivated the public with their equivalent products, adding a twist to suit the country’s tastes and political climate.

    Baidu, which made its name in search engines and has in recent years been stepping up its game in autonomous driving, operates ERNIE-ViLG, a 10-billion parameter model trained on a data set of 145 million Chinese image-text pairs. How does it fare against its American counterpart? Below are the results from the prompt “kids eating shumai in New York Chinatown” given to Stable Diffusion, versus the same prompt in Chinese (纽约唐人街小孩吃烧卖) for ERNIE-ViLG.

    Stable Diffusion

    ERNIE-ViLG

    As someone who grew up eating dim sum in China and Chinatowns, I’d say the results are a tie. Neither got the right shumai, which, in the dim sum context, is a succulent shrimp-and-pork dumpling in a half-open yellow wrapping. While Stable Diffusion nails the atmosphere of a Chinatown dim sum eatery, its shumai is off (but I see where the machine is going). And while ERNIE-ViLG does generate a type of shumai, it’s a variety more commonly seen in eastern China rather than the Cantonese version.

    The quick test reflects the difficulty of capturing cultural nuances when the data sets used are inherently biased: Stable Diffusion presumably has more data on the Chinese diaspora, while ERNIE-ViLG is probably trained on a greater variety of shumai images that are rarer outside China.

    Another Chinese tool that has made noise is Tencent’s Different Dimension Me, which can turn photos of people into anime characters. The AI generator exhibits its own bias. Intended for Chinese users, it took off unexpectedly in other anime-loving regions like South America. But users soon realized the platform failed to identify black and plus-size individuals, groups that are noticeably missing in Japanese anime, leading to offensive AI-generated results.

    Aside from ERNIE-ViLG, another large-scale Chinese text-to-image model is Taiyi, a brainchild of IDEA, a research lab led by renowned computer scientist Harry Shum, who co-founded Microsoft’s largest research branch outside the U.S., Microsoft Research Asia. The open source AI model is trained on 20 million filtered Chinese image-text pairs and has one billion parameters.

    Unlike Baidu and other profit-driven tech firms, IDEA is one of a handful of institutions backed by local governments in recent years to work on cutting-edge technologies. That means the center probably enjoys more research freedom without the pressure to drive commercial success. Based in the tech hub of Shenzhen and supported by one of China’s wealthiest cities, it’s an up-and-coming outfit worth watching.

    Rules of AI

    China’s generative AI tools aren’t just characterized by the domestic data they learn from; they are also shaped by local laws. As MIT Technology Review pointed out, Baidu’s text-to-image model filters out politically sensitive keywords. That’s expected, given censorship has long been a universal practice on the Chinese internet.

    What’s more significant to the future of the fledgling field is the new set of regulatory measures targeting what the government dubs “deep synthesis tech”, which denotes “technology that uses deep learning, virtual reality, and other synthesis algorithms to generate text, images, audio, video, and virtual scenes.” As with other types of internet services in China, from games to social media, users are asked to verify their names before using generative AI apps. The fact that prompts can be traced to one’s real identity inevitably has a restrictive impact on user behavior.

    But on the bright side, these rules could lead to more responsible use of generative AI, which is already being abused elsewhere to churn out NSFW and sexist content. The Chinese regulation, for example, explicitly bans people from generating and spreading AI-created fake news. How that will be implemented, though, lies with the service providers.

    “It’s interesting that China is at the forefront of trying to regulate [generative AI] as a country,” said Yoav Shoham, founder of AI21 Labs, an Israel-based OpenAI rival, in an interview. “There are various companies that are putting limits to AI… Every country I know of has efforts to regulate AI or to somehow make sure that the legal system, or the social system, is keeping up with the technology, specifically about regulating the automatic generation of content.”

    But there’s no consensus as to how the fast-changing field should be governed, yet. “I think it’s an area we’re all learning together,” Shoham admitted. “It has to be a collaborative effort. It has to involve technologists who actually understand the technology and what it does and what it doesn’t do, the public sector, social scientists, and people who are impacted by the technology as well as the government, including the sort of commercial and legal aspect of the regulation.”

    Monetizing AI

    As artists fret over being replaced by powerful AI, many in China are leveraging machine learning algorithms to make money in a plethora of ways. They aren’t from the most tech-savvy crowd. Rather, they are opportunists or stay-home mums looking for an extra source of income. They realize that by improving their prompts, they can trick AI into making creative emojis or stunning wallpapers, which they can post on social media to drive ad revenues or directly charge for downloads. The really skilled ones are also selling their prompts to others who want to join the money-making game — or even train them for a fee.

    Others in China are using AI in their formal jobs like the rest of the world. Light fiction writers, for instance, can cheaply churn out illustrations for their works, a genre that is shorter than novels and often features illustrations. An intriguing use case that can potentially disrupt realms of manufacturing is using AI to design T-shirts, press-on nails, and prints for other consumer goods. By generating large batches of prototypes quickly, manufacturers save on design costs and shorten their production cycle.

    It’s too early to know how differently generative AI is developing in China and the West. But entrepreneurs have made decisions based on their early observation. A few founders told me that businesses and professionals are generally happy to pay for AI because they see a direct return on investment, so startups are eager to carve out industry use cases. One clever application came from Sequoia China-backed Surreal (later renamed to Movio) and Hillhouse-backed ZMO.ai, which discovered during the pandemic that e-commerce sellers were struggling to find foreign models as China kept its borders shut. The solution? The two companies worked on algorithms that generated fashion models of all shapes, colors, and races.

    But some entrepreneurs don’t believe their AI-powered SaaS will see the type of skyrocketing valuation and meteoric growth their Western counterparts, like Jasper and Stability AI, are enjoying. Over the years, numerous Chinese startups have told me they have the same concern: China’s enterprise customers are generally less willing to pay for SaaS than those in developed economies, which is why many of them start expanding overseas.

    Competition in China’s SaaS space is also dog-eat-dog. “In the U.S., you can do fairly well by building product-led software, which doesn’t rely on human services to acquire or retain users. But in China, even if you have a great product, your rival could steal your source code overnight and hire dozens of customer support staff, which don’t cost that much, to outrace you,” said a founder of a Chinese generative AI startup, requesting anonymity.

    Shi Yi, founder and CEO of sales intelligence startup FlashCloud, agreed that Chinese companies often prioritize short-term returns over long-term innovation. “In regard to talent development, Chinese tech firms tend to be more focused on getting skilled at applications and generating quick money,” he said. One Shanghai-based investor, who declined to be named, said he was “a bit disappointed that major breakthroughs in generative AI this year are all happening outside China.”

    Roadblocks ahead

    Even when Chinese tech firms want to invest in training large neural networks, they might lack the best tools. In September, the U.S. government slapped China with export controls on high-end AI chips. While many Chinese AI startups are focused on the application front and don’t need high-performance semiconductors that handle seas of data, for those doing basic research, using less powerful chips means computing will take longer and cost more, said an enterprise software investor at a top Chinese VC firm, requesting anonymity. The good news is, he argued, such sanctions are pushing China to invest in advanced technologies over the long run.

    As a company that bills itself as a leader in China’s AI field, Baidu believes the impact of U.S. chip sanctions on its AI business is “limited” both in the short and longer term, said the firm’s executive vice president and head of AI Cloud Group, Dou Shen, on its Q3 earnings call. That’s because “a large portion” of Baidu’s AI cloud business “does not rely too much on the highly advanced chips.” And in cases where it does need high-end chips, it has “already stocked enough in hand, actually, to support our business in the near term.”

    What about the future? “When we look at it at a mid- to a longer-term, we actually have our own developed AI chip, so named Kunlun,” the executive said confidently. “By using our Kunlun chips [Inaudible] in large language models, the efficiency to perform text and image recognition tasks on our AI platform has been improved by 40% and the total cost has been reduced by 20% to 30%.”

    Time will tell if Kunlun and other indigenous AI chips will give China an edge in the generative AI race.

    [ad_2]

    Rita Liao

    Source link

  • Your Memories. Their Cloud.

    Your Memories. Their Cloud.

    [ad_1]

    I noticed a philosophical divide among the archivists I spoke with. Digital archivists were committed to keeping everything with the mentality that you never know what you might want one day, while professional archivists who worked with family and institutional collections said it was important to pare down to make an archive manageable for people who look at it in the future.

    “It’s often very surprising what turns out to matter,” said Jeff Ubois, who is in the first camp and has organized conferences dedicated to personal archiving.

    He brought up a historical example. During World War II, the British war office asked people who had taken coastal vacations to send in their postcards and photographs, an intelligence-gathering exercise to map the coastline that led to the selection of Normandy as the best place to land troops.

    Mr. Ubois said it’s hard to predict the future uses of what we save. Am I socking this away just for me, to reflect on my life as I age? Is it for my descendants? Is it for an artificial intelligence that will act as a memory prosthetic when I’m 90? And if so, does that A.I. really need to remember that I Googled “starbucks ice cream calorie count” one morning in January 2011?

    Pre-internet, we pared down our collections to make them manageable. But now, we have metadata and advanced search techniques to sort through our lives: timestamps, geotags, object recognition. When I recently lost a close relative, I used the facial recognition feature in Apple Photos to unearth photos of him I’d forgotten I’d taken. I was glad to have them, but should I keep all the photos, even the unflattering ones?

    Bob Clark, the director of archives at the Rockefeller Archive Center, said that the general rule of thumb in his profession is that less than 5 percent of the material in a collection is worth saving. He faulted the technology companies for offering too much storage space, eliminating the need for deliberating over what we keep.

    “They’ve made it so easy that they have turned us into unintentional data hoarders,” he said.

    The companies try, occasionally, to play the role of memory miner, surfacing moments that they think should be meaningful, probably aiming to increase my engagement with their platform or inspire brand loyalty. But their algorithmic archivists inadvertently highlight the value of human curation.

    [ad_2]

    Kashmir Hill

    Source link

  • The technology that will invade our lives in 2023

    The technology that will invade our lives in 2023

    [ad_1]

    However, he added that the technology was not something that would take hold overnight. Wireless headsets are still heavy and used indoors, which means the first version of Apple’s headset will most likely be used, like many that came before it, for video games.

    In other words, there will still be plenty of talk about the metaverse and virtual headsets (augmented, mixed or whatever you want to call them) in 2023, but it most likely won’t yet be the year these devices become widely popular, said Carolina Milanesi, a consumer technology analyst at Creative Strategies, a research firm.

    “From the consumer’s perspective, it’s still very unclear what you’re buying for the thousands of dollars a headset costs,” she said. “Do I have to hold a meeting in V.R.? With or without legs, it’s not a necessity.”

    Tesla continued its dominance of electric vehicle sales this year, but 2023 could be a turning point for the industry. Tesla’s stock has plummeted this year, and its brand has suffered since Musk’s hostile takeover of Twitter. At the same time, competition in the market is intensifying as automakers such as Ford Motor, Kia, General Motors, Audi and Rivian ramp up production of their electric cars.

    In addition, Tesla said in November that it would open up the design of its charging connector to other electric cars. That would let drivers of other kinds of cars recharge their batteries at Tesla charging stations, which are far more plentiful than other types of chargers.

    On top of that, California and New York have moved to ban the sale of gasoline-powered cars starting in 2035. All of this sets the stage for the electric car industry to become much bigger than a single brand in 2023.

    Twitter was chaotic for much of 2022, and that is likely to continue into next year. In response to the backlash, Musk this month asked his Twitter followers in a “poll” whether he should step down as the company’s leader. A majority, around ten million users, voted yes, but Musk said he will leave the post only when he finds someone “foolish enough to take the job.”

    [ad_2]

    Brian X. Chen

    Source link

  • Fidelity slashes the value of its Twitter stake by over half

    Fidelity slashes the value of its Twitter stake by over half

    [ad_1]

    Fidelity, which was among the group of outside investors that helped Elon Musk finance his $44 billion takeover of Twitter, has slashed the value of its stake in Twitter by 56%. The recalculation comes as Twitter navigates a number of challenges, most of them the result of chaotic management decisions — including an exodus of advertisers from the network.

    Fidelity’s Blue Chip Growth Fund stake in Twitter was valued at around $8.63 million as of November, according to a monthly disclosure and Fidelity Contrafund notice first reported today by Axios. That’s down from $19.66 million as of the end of October.
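    The reported figures are internally consistent. Here is a quick sanity check of the arithmetic, using the valuations as reported by Axios (the dollar amounts are the only inputs; nothing else is assumed):

    ```python
    # Fidelity's Blue Chip Growth Fund valued its Twitter stake at $19.66M
    # at the end of October and $8.63M a month later (figures per Axios).
    october_value = 19.66   # $ millions
    november_value = 8.63   # $ millions

    # Fractional markdown from October to November.
    markdown = (october_value - november_value) / october_value
    print(f"{markdown:.1%}")  # ~56.1%, matching the reported 56% cut
    ```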

    Macroeconomic trends are likely to blame in part. Stripe took a 28% internal valuation cut in July, while Instacart this week reportedly suffered a 75% cut to its valuation.

    But Twitter’s wishy-washy policies post-Musk clearly haven’t helped matters.

    The network’s become less stable at a technical level as of late, on Wednesday suffering outages after Musk made “significant” backend server architecture changes. Twitter recently laid off employees in its public policy and engineering department, dissolving the group responsible for weighing in on content moderation and human rights-related issues such as suicide prevention. And the company’s raised the ire of regulators after banning — and then quickly reinstating — accounts belonging to prominent journalists.

    Then again — as Axios business editor Dan Primack pointed out, appropriately in a tweet — Fidelity seems to rely heavily on public market performance where it concerns valuations. It’s quite possible that the firm doesn’t have any inside info on Twitter’s financial performance.

    Cutbacks at Twitter abound as the company approaches $1 billion in interest payments due on $13 billion in debt, all while revenue dips. A November report from Media Matters for America estimated that half of Twitter’s top 100 advertisers, which spent almost $750 million on Twitter ads this year combined, appear to no longer be advertising on the website. Twitter’s heavily pushing its Twitter Blue plan, aiming to make it a larger profit driver. But third-party tracking data suggest it’s been slow to take off.

    Some Twitter employees are bringing their own toilet paper to work after the company cut back on janitorial services, the New York Times recently reported, and Twitter has stopped paying rent for several of its offices including its San Francisco headquarters.

    Musk has attempted to save around $500 million in costs unrelated to labor, according to the aforementioned Times report, over the past few weeks shutting down a data center and launching a fire sale after putting office items up for auction in a bid to recoup costs.

    Separately, Musk’s team has reached out to investors for potential fresh investment for Twitter at the same price as the original $44 billion acquisition, according to The Wall Street Journal.

    A poll put up by Musk asking if he should step down as head of the company closed December 19 with users voting resoundingly in favor of him leaving. Musk responded several days afterward, saying he’d resign as CEO “as soon as [he found] someone foolish enough to take the job” and after that “just run the software and servers teams.”

    [ad_2]

    Kyle Wiggers

    Source link

  • How Google Maps lets loved ones know you’re safe at all times

    How Google Maps lets loved ones know you’re safe at all times

    [ad_1]

    Sharing your whereabouts with your loved ones, so they know you’re safe or can call for help if you’re in danger, can be very comforting to both them and you. And it’s now easier than ever with Google Maps. Here’s how:

    CLICK TO GET KURT’S CYBERGUY NEWSLETTER WITH QUICK TIPS, TECH REVIEWS, SECURITY ALERTS AND EASY HOW-TO’S TO MAKE YOU SMARTER

    Google Maps can do much more than just show you where to go.

    WHY WINDOWS IS #1 TARGET FOR MALWARE: 2 EASY WAYS TO STAY SAFE

    How to share your Google Maps location on an iPhone, iPad, Android or web browser

    • Go to your Google Maps app (make sure it’s updated to the latest version) or log in to Google Maps at Google.com/maps
    • Tap your profile picture in the top right
    • Click Location Sharing
    • Tap the “Share Location” button
    • In the first row, select how long to share your location (i.e. “for 1 hour” or “until you turn this off”). If you don’t feel comfortable sharing your location indefinitely, set a time frame so sharing expires on its own.
    • In the next row, select the people with whom you want to share your location. Note: anyone with the link you send via email or text will be able to see your name, photo and real-time location.
    • Click the Share button
    • Your contact will receive an email or text message with a link. Once clicked, your contact can view your location on a Google Map on their device.
    Here’s where you can share your location with friends and loved ones.

    How to stop sharing your location on an iPhone, iPad, Android or web browser

    • Go to your Google Maps app (make sure your app is updated to the latest version) or log in to Google Maps at Google.com/maps
    • Tap your profile picture in the top right
    • Select Location Sharing
    • In the bottom row, you’ll see the contact you shared your location with
    • Click that row
    • In the next menu, in the second row, click “Stop” to stop sharing your location.

    ARE YOU BEING STALKED? A SIMPLE SOFTWARE UPDATE CAN SAVE YOUR LIFE

    Follow these steps to share your location on Google Maps.

    HOW TO RESCUE YOURSELF FROM HOLIDAY TRAVEL NIGHTMARES

    Can I share with someone who doesn’t have a Google account?

    Yes, you may share your Google Maps location with someone who does not have a Google account. The steps are just a little different.

    • On your mobile device or tablet, open the Google Maps app
    • Tap your profile picture and go to Location Sharing
    • Click Share Location
    • Click More Options
    • A Share with a link menu will pop up.  Click the “Share” button
    • Tap Copy to copy your location-sharing link
    • Paste that link in an email, text, or other messaging app and send it to whoever you wish to share your location with.

    For more Google tips, visit CyberGuy.com and search “Google” by clicking the magnifying glass icon at the top of my website.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    And while you’re on my site, be sure to subscribe to my free CyberGuy Report Newsletter by clicking the “Free newsletter” link at the top of my website.

    Copyright 2023 CyberGuy.com. All rights reserved. CyberGuy.com articles and content may contain affiliate links that earn a commission when purchases are made.

    [ad_2]

    Source link

  • Why does my iPhone screen keep dimming by itself?

    Why does my iPhone screen keep dimming by itself?

    [ad_1]

    Do you ever open up your iPhone and realize that your screen is a lot dimmer than it was an hour ago? That isn’t a coincidence. It’s an automatic battery-saving measure that Apple programmed into every iPhone. 

    The brightness will adjust by itself depending on how much light is in your surrounding environment. The less light you have around you, the dimmer your screen will get. 

    Perhaps you’d rather have your iPhone at the same brightness level at all times. Maybe you struggle with your eyesight and simply see better when your phone is at a higher brightness level regardless of if you’re in a dark place or not. 

    The good news is, there’s a way to adjust your iPhone’s brightness level so that it never switches up on you again. 


    AT YEAR END, GREAT SAVES WORTH REMEMBERING: HOW AN IPHONE FEATURE HELPED RESCUE PEOPLE AFTER CAR CRASHES

    How do I adjust the brightness level on my iPhone? 

    Adjusting your brightness level is quite simple, and there are two ways to go about it. The first is as follows: 

    • Open your iPhone and use your finger to swipe down from the top right-hand corner of the screen

    • Go to the Brightness icon and slide up and down until you get to a level you’re comfortable with
    Here’s how to adjust the brightness on your iPhone.
    (CyberGuy.com)

    The other way to adjust your brightness level on your iPhone is as follows: 

    • Open your Settings app and select Display & Brightness

    Go here to adjust the brightness on your iPhone.
    (CyberGuy.com)

    • Go to the brightness slider and slide it to the left or right until you get to a level you’re comfortable with
    Here’s how to adjust the brightness on your iPhone.
    (CyberGuy.com)

    How do I keep the brightness level from automatically dimming? 

    There are two things you need to do to keep your phone’s brightness at a consistent level, and both require changes in your Settings app. 

    The first is turning off Auto-Brightness (this only applies to iPhone 14 Pro and Pro Max models and later), and the second is turning off True Tone.

    SEND A FUN MESSAGE WITH THESE IPHONE TRICKS

    How to turn off Auto-Brightness

    • Open your Settings app
    • Scroll down and select Accessibility
    • Select Display & Text Size
    • Scroll down and toggle off Auto-Brightness

    How to turn off True Tone

    Here’s how to turn off your iPhone’s True Tone.
    (CyberGuy.com)

    • Open your Settings app
    • Scroll down and select Display & Brightness
    • Underneath the brightness slider is the True Tone option. Toggle this off
    Here’s the slider to turn True Tone on or off.
    (CyberGuy.com)

    What else can I do to prevent my brightness level from changing? 

    APPLE’S REPLACEMENT FOR THE PASSWORD

    The biggest thing you can do to keep your brightness level the same is to not let your iPhone overheat. Even if you have Auto-Brightness and True Tone turned off, your iPhone will automatically turn the brightness down as a safety measure if the device is overheating. 

    Your phone can overheat sometimes in just seconds if it is left in the sun or any hot location (e.g. inside a car) for too long. It can also overheat if you have a faulty battery that needs to be replaced. 

    How do I check the status of my phone battery? 

    • Go to Settings
    • Scroll down and select Battery

    Here’s how to check your iPhone’s battery status.
    (CyberGuy.com)

    • Select Battery Health & Charging

    This screen will show the battery status level for your iPhone.
    (CyberGuy.com)

    A display will appear showing the health level of your battery.

    This screen shows your battery’s health level.
    (CyberGuy.com)

    CLICK HERE TO GET THE FOX NEWS APP

    If your maximum capacity is below 80%, Apple recommends looking into getting a new battery.

    For more Apple tips, head over to CyberGuy.com and search “Apple” and be sure to subscribe to my free CyberGuy Report Newsletter at CyberGuy.com/Newsletter. 

    [ad_2]

    Source link

  • Daily Crunch: To take the friction out of consumer messaging, more companies are entering the Matrix

    Daily Crunch: To take the friction out of consumer messaging, more companies are entering the Matrix

    [ad_1]

    To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

    Welcome back to your daily digest of TechCrunch goodness. It is my last day with you (you’re welcome!), so Christine will be back in the Daily Crunch seat on Tuesday. Haje will not be back just yet because he is heading to Vegas as part of the team covering CES. Speaking of CES, Brian raised the curtain on what we can expect from its first full-fledged production since before COVID.

    Bye for now, folks. Safe and Happy New Year to you all. — Henry

    At the top

    • Into the Matrix: No, not that Matrix. We’re talking about the open standards-based comms protocol called Matrix that Paul went deep on. Its network doubled thanks in part to increased use by enterprises and government. Reddit is also having a go, experimenting with it for its chat feature.
    • For the fusion: Tim took a look at five startups primed to benefit from the recent breakthroughs in fusion. [TC+]
    • Alt-ChatGPT: In the wake of the response to OpenAI’s ChatGPT comes an open source equivalent. It’s called PaLM + RLHF (rolls right off the tongue, eh?), but Kyle writes that it isn’t pre-trained, which means good luck running it.
    • The Meta eyes have it: Amanda writes that Meta is getting into the eyewear business with its purchase of the Netherlands-based smart eyewear company Luxexcel.
    • Book tracking: Aisha rounded up a list of five apps that you can use to track all that reading you’re planning to do once the clock strikes 2023.
    • Netflix vs. Hulu: Perhaps you’ve decided to cut a streaming service or two from your lineup in light of their continued price hikes. Lauren took a look at the features of Netflix and Hulu to help you make a decision.

    What to look for in a term sheet as a first-time founder

    Image Credits: syolacan / Getty Images

    Silicon Valley reporter Connie Loizos interviewed three seasoned VCs to get their best advice for novice entrepreneurs. She asked them:

    • Why should you know what’s going to be in a term sheet before you see it?
    • Which mechanism is best to use at the outset?
    • How much equity is distributed at each level of early-stage fundraising?
    • What’s a red flag in a term sheet?
    • How should founders think about valuation when it comes to that first term sheet?

    TechCrunch+ is our membership program that helps founders and startup teams get ahead of the pack. You can sign up here. Use code “DC” for a 15% discount on an annual subscription!

    Looking back and looking ahead

    We rounded up TC+ venture capital stories from a year that unfortunately saw a lot of downs. And here are a few more favorites for good measure:

    Zack and Carly took a look back at how law enforcement cracked down on cybercriminals this year. They examine the efforts of both breachers and cops to bring justice.

    Indian startups were flush with cash with record investments. Now, Manish writes, the ecosystem is struggling with tightening funding purses, layoffs and disappointing public debuts.

    [ad_2]

    Henry Pickavet

    Source link

  • Her Child’s Naked Dance Killed Her Google Account. New Appeals Path Restored It.

    Her Child’s Naked Dance Killed Her Google Account. New Appeals Path Restored It.

    [ad_1]

    When Google informed a mother in Colorado that her account had been disabled, it felt as if her house had burned down, she said. In an instant, she lost access to her wedding photos, videos of her son growing up, her emails going back a decade, her tax documents and everything else she had kept in what she thought would be the safest place. She had no idea why.

    Google refused to reconsider the decision in August, saying her YouTube account contained harmful content that might be illegal. It took her weeks to discover what had happened: Her 9-year-old eventually confessed that he had used an old smartphone of hers to upload a YouTube Short of himself dancing around naked.

    Google has an elaborate system, involving algorithmic monitoring and human review, to prevent the sharing and storing of exploitative images of children on its platforms. If a photo or video uploaded to the company’s servers is deemed to be sexually explicit content featuring a minor, Google disables the user’s account, across all of Google’s services, and reports the content to a nonprofit that works with law enforcement. Users have an opportunity to challenge Google’s action, but in the past they had no real opportunity to provide context for a nude photo or video of a child.

    Now, after reporting by The New York Times, Google has changed its appeals process, giving users accused of the heinous crime of child sexual exploitation the ability to prove their innocence. The content deemed exploitative will still be removed from Google and reported, but the users will be able to explain why it was in their account — clarifying, for example, that it was a child’s ill-thought-out prank.

    Susan Jasper, Google’s head of trust and safety operations, said in a blog post that the company would “provide more detailed reasons for account suspensions.” She added, “And we will also update our appeals process to allow users to submit even more context about their account, including to share more information and documentation from relevant independent professionals or law enforcement agencies to aid our understanding of the content detected in the account.”

    In recent months The Times, reporting on the power that technology companies wield over the most intimate parts of their users’ lives, brought to Google’s attention several instances when its previous review process appeared to have gone awry.

    In two separate cases, fathers took photos of their naked toddlers to facilitate medical treatment. An algorithm automatically flagged the images, and then human moderators deemed them in violation of Google’s rules. The police determined that the fathers had committed no crime, but the company still deleted their accounts.

    The fathers, one in California and the other in Texas, found themselves stymied by Google’s previous appeals process: At no point were they able to provide medical records, communications with their doctors or police documents absolving them of wrongdoing. The father in San Francisco eventually got six months of his Google data back, but on a thumb drive from the Police Department, which had gotten it from the company with a warrant.

    “When we find child sexual abuse material on our platforms, we remove it and suspend the related account,” a Google spokesman, Matt Bryant, said in a statement. “We take the implications of suspending an account seriously, and our teams work constantly to minimize the risk of an incorrect suspension.”

    Technology companies that offer free services to consumers are notoriously bad at customer support. Google has billions of users. Last year, it disabled more than 270,000 accounts for violating its rules against child sexual abuse material. In the first half of this year, it disabled more than it did in all of 2021.

    “We don’t know what percentage of those are false positives,” said Kate Klonick, an associate professor at St. John’s University School of Law who studies internet governance issues. Even just 1 percent would result in hundreds of appeals per month, she said. She predicted that Google would need to expand its trust and safety team to handle the disputes.

    “It seems like Google is making the right move,” Ms. Klonick said, “to adjudicate and solve for false positives. But it’s an expensive proposition.”

    Evelyn Douek, an assistant professor at Stanford Law School, said she would like Google to provide more details about how the new appeals process would work.

    “Just the establishment of a process doesn’t solve everything. The devil is in the details,” she said. “Is the new review meaningful? What is the timeline?”

    A Colorado mother eventually received a warning on YouTube saying her content violated community guidelines. (Credit: YouTube)

    It took four months for the mother in Colorado, who asked that her name not be used to protect her son’s privacy, to get her account back. Google reinstated it after The Times brought the case to the company’s attention.

    “We understand how upsetting it would be to lose access to your Google account, and the data stored in it, due to a mistaken circumstance,” Mr. Bryant said in a statement. “These cases are extraordinarily rare, but we are working on ways to improve the appeals process when people come to us with questions about their account or believe we made the wrong decision.”

    Google did not tell the woman that the account was active again. Ten days after her account had been reinstated, she learned of the decision from a Times reporter.

    When she logged in, she found that everything had been restored beyond the video her son had made. A message popped up on YouTube, featuring an illustration of a referee blowing a whistle and saying her content had violated community guidelines. “Because it’s the first time, this is just a warning,” the message said.

    “I wish they had just started here in the first place,” she said. “It would have saved me months of tears.”

    Jason Scott, a digital archivist who wrote a memorably profane blog post in 2009 warning people not to trust the cloud, said companies should be legally obligated to give users their data, even when an account was closed for rule violations.

    “Data storage should be like tenant law,” Mr. Scott said. “You shouldn’t be able to hold someone’s data and not give it back.”

    The mother also received an email from “The Google Team,” sent on Dec. 9.

    “We understand that you attempted to appeal this several times, and apologize for the inconvenience this caused,” it said. “We hope you can understand we have strict policies to prevent our services from being used to share harmful or illegal content, especially egregious content like child sexual abuse material.”

    Many companies besides Google monitor their platforms to try to prevent the rampant sharing of child sexual abuse images. Last year, more than 100 companies sent 29 million reports of suspected child exploitation to the National Center for Missing and Exploited Children, the nonprofit that acts as the clearinghouse for such material and passes reports on to law enforcement for investigation. The nonprofit does not track how many of those reports represent true abuse.

    Meta sends the highest volume of reports to the national center — more than 25 million in 2021 from Facebook and Instagram. Last year, data scientists at the company analyzed some of the flagged material and found examples that qualified as illegal under federal law but were “non-malicious.” In a sample of 150 flagged accounts, more than 75 percent “did not exhibit malicious intent,” said the researchers, giving examples that included a “meme of a child’s genitals being bitten by an animal” that was shared humorously and teenagers sexting each other.

    [ad_2]

    Kashmir Hill

    Source link

  • QuickVid uses AI to generate short-form videos, complete with voiceovers

    QuickVid uses AI to generate short-form videos, complete with voiceovers

    [ad_1]

    Generative AI is coming for videos. A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos.

    Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2 and adds a synthetic voiceover and background music from YouTube’s royalty-free music library. QuickVid’s creator, Daniel Habib, says that he’s building the service to help creators meet the “ever-growing” demand from their fans.

    “By providing creators with tools to quickly and easily produce quality content, QuickVid helps creators increase their content output, reducing the risk of burnout,” Habib told TechCrunch in an email interview. “Our goal is to empower your favorite creator to keep up with the demands of their audience by leveraging advancements in AI.”

    But depending on how they’re used, tools like QuickVid threaten to flood already-crowded channels with spammy and duplicative content. They also face potential backlash from creators who opt not to use the tools, whether because of cost ($10 per month) or on principle, yet might have to compete with a raft of new AI-generated videos.

    Going after video

    QuickVid, which Habib, a self-taught developer who previously worked at Meta on Facebook Live and video infrastructure, built in a matter of weeks, launched on December 27. It’s relatively bare bones at present — Habib says that more personalization options will arrive in January — but QuickVid can cobble together the components that make up a typical informational YouTube Short or TikTok video, including captions and even avatars.

    It’s easy to use. First, a user enters a prompt describing the subject matter of the video they want to create. QuickVid uses the prompt to generate a script, leveraging the generative text powers of GPT-3. From keywords either extracted from the script automatically or entered manually, QuickVid selects a background video from the royalty-free stock media library Pexels and generates overlay images using DALL-E 2. It then outputs a voiceover via Google Cloud’s text-to-speech API — Habib says that users will soon be able to clone their voice — before combining all these elements into a video.
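    The flow described above can be sketched in a few lines. This is a minimal, hypothetical sketch only: each stage function below is a stand-in for the external service the article names (GPT-3 for the script, Pexels for the background, DALL-E 2 for overlays, a text-to-speech API for the voiceover) and returns a placeholder value rather than calling any real API — none of these function names come from QuickVid itself.

    ```python
    from dataclasses import dataclass

    @dataclass
    class VideoDraft:
        """Everything the pipeline assembles before final rendering."""
        script: str
        keywords: list
        background: str
        overlays: list
        voiceover: str

    def generate_script(prompt: str) -> str:
        # Stand-in for a GPT-3 call that expands the prompt into a script.
        return f"A short video about {prompt}."

    def extract_keywords(script: str) -> list:
        # Stand-in for keyword extraction; the real pipeline can also
        # accept manually entered keywords.
        return [w.strip(".").lower() for w in script.split() if len(w) > 4]

    def pick_background(keywords: list) -> str:
        # Stand-in for a lookup in the Pexels royalty-free stock library.
        return f"pexels://{keywords[0]}.mp4" if keywords else "pexels://default.mp4"

    def generate_overlays(keywords: list) -> list:
        # Stand-in for DALL-E 2 image generation, one image per keyword.
        return [f"dalle2://{k}.png" for k in keywords]

    def synthesize_voiceover(script: str) -> str:
        # Stand-in for a text-to-speech call on the finished script.
        return f"tts://{len(script)}-chars.mp3"

    def assemble_video(prompt: str) -> VideoDraft:
        # The single-prompt entry point: script first, then everything
        # else is derived from the script or its keywords.
        script = generate_script(prompt)
        keywords = extract_keywords(script)
        return VideoDraft(
            script=script,
            keywords=keywords,
            background=pick_background(keywords),
            overlays=generate_overlays(keywords),
            voiceover=synthesize_voiceover(script),
        )

    draft = assemble_video("cats")
    print(draft.background)  # -> pexels://short.mp4
    ```

    The point of the sketch is the dependency order: every asset is derived from the script, which is itself derived from a single prompt — which is why one word is enough input for the whole video.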

    Image Credits: QuickVid

    QuickVid certainly isn’t pushing the boundaries of what’s possible with generative AI. Both Meta and Google have showcased AI systems that can generate completely original clips given a text prompt. But QuickVid amalgamates existing AI to exploit the repetitive, templated format of B-roll-heavy short-form videos, getting around the problem of having to generate the footage itself.

    “Successful creators have an extremely high-quality bar and aren’t interested in putting out content that they don’t feel is in their own voice,” Habib said. “This is the use case we’re focused on.”

    That high bar notwithstanding, QuickVid’s videos are a mixed bag in terms of quality. The background clips tend to be random or only tangentially related to the topic, which isn’t surprising given that QuickVid is currently limited to the Pexels catalog. The DALL-E 2-generated images, meanwhile, exhibit the limitations of today’s text-to-image tech, like garbled text and off proportions.

    In response to my feedback, Habib said that QuickVid is “being tested and tinkered with daily.”

    Copyright issues

    According to Habib, QuickVid users retain the right to use the content they create commercially and have permission to monetize it on platforms like YouTube. But the copyright status of AI-generated content is … nebulous, at least presently. The U.S. Copyright Office recently moved to revoke copyright protection for an AI-generated comic, for example, saying copyrightable works require human authorship.

    When asked how the decision might affect QuickVid, Habib said he believes that it pertains only to the “patentability” of AI-generated products and not the rights of creators to use and monetize their content. Creators, he pointed out, aren’t often submitting patents for videos and usually lean into the creator economy, letting other creators repurpose their clips to increase their own reach.

    “Creators care about putting out high-quality content in their voice that will help grow their channel,” Habib said.

    Another legal challenge on the horizon might affect QuickVid’s DALL-E 2 integration — and, by extension, the site’s ability to generate image overlays. Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating system, to regurgitate sections of licensed code without providing credit. (Copilot was co-developed by OpenAI and GitHub, which Microsoft owns.) The case has implications for generative art AI like DALL-E 2, which has similarly been found to copy and paste images from the datasets on which it was trained.

    Habib isn’t concerned, arguing that the generative AI genie’s out of the bottle. “If another lawsuit showed up and OpenAI disappeared tomorrow, there are several alternatives that could power QuickVid,” he said, referring to the open source DALL-E 2-like system Stable Diffusion. QuickVid is already testing Stable Diffusion for generating avatar pics.

    Moderation and spam

    Aside from the legal dilemmas, QuickVid might soon have a moderation problem on its hands. Generative AI has well-known toxicity and factual accuracy problems, despite the filters and techniques OpenAI has implemented to curb them. GPT-3 spouts misinformation, particularly about recent events, which lie beyond the boundaries of its knowledge base. And ChatGPT, a fine-tuned offspring of GPT-3, has been shown to use sexist and racist language.

    That’s worrisome, particularly for people who’d use QuickVid to create informational videos. In a quick test, I had my partner — who’s far more creative than I am, particularly in this area — enter a few offensive prompts to see what QuickVid would generate. To QuickVid’s credit, obviously problematic prompts like “Jewish new world order” and “9/11 conspiracy theory” didn’t yield toxic scripts. But for “Critical race theory indoctrinating students,” QuickVid generated a video implying that critical race theory could be used to brainwash schoolchildren.

    Habib says that he’s relying on OpenAI’s filters to do most of the moderation work and asserts that it’s incumbent on users to manually review every video created by QuickVid to ensure “everything is within the boundaries of the law.”

    “As a general rule, I believe people should be able to express themselves and create whatever content they want,” Habib said.

    That apparently includes spammy content. Habib makes the case that the video platforms’ algorithms, not QuickVid, are best positioned to determine the quality of a video, and that people who produce low-quality content “are only damaging their own reputations.” The reputational damage will naturally disincentivize people from creating mass spam campaigns with QuickVid, he says.

    “If people don’t want to watch your video, then you won’t receive distribution on platforms like YouTube,” he added. “Producing low-quality content will also make people look at your channel in a negative light.”

    But it’s instructive to look at ad agencies like Fractl, which in 2019 used an AI system called Grover to generate an entire site of marketing materials — reputation be damned. In an interview with The Verge, Fractl partner Kristin Tynski said that she foresaw generative AI enabling “a massive tsunami of computer-generated content across every niche imaginable.”

    In any case, video-sharing platforms like TikTok and YouTube haven’t had to contend with moderating AI-generated content on a massive scale. Deepfakes — synthetic videos that replace an existing person with someone else’s likeness — began to populate platforms like YouTube several years ago, driven by tools that made deepfaked footage easier to produce. But unlike even the most convincing deepfakes today, the types of videos QuickVid creates aren’t obviously AI-generated in any way.

    Google Search’s policy on AI-generated text might be a preview of what’s to come in the video domain. Google doesn’t treat synthetic text differently from human-written text where it concerns search rankings but takes actions on content that’s “intended to manipulate search rankings and not help users.” That includes content stitched together or combined from different web pages that “[doesn’t] add sufficient value” as well as content generated through purely automated processes, both of which might apply to QuickVid.

    In other words, AI-generated videos might not be banned from platforms outright should they take off in a major way but rather simply become the cost of doing business. That isn’t likely to allay the fears of experts who believe that platforms like TikTok are becoming a new home for misleading videos, but — as Habib said during the interview — “there is no stopping the generative AI revolution.”

    [ad_2]

    Kyle Wiggers

    Source link

  • The year that brought Silicon Valley back down to earth | CNN Business

    The year that brought Silicon Valley back down to earth | CNN Business

    [ad_1]



    CNN
     — 

    On the first trading day of 2022, Apple hit a new milestone for the tech industry: the iPhone maker became the first publicly traded company to hit a $3 trillion market cap, with Microsoft and Google not far behind. As eye-popping as that valuation was, there were headlines speculating about how long it would be before Apple and its rivals topped $5 trillion.

    The tech industry, already dominant, only seemed destined to grow even bigger at the start of this year. The spread of the Omicron variant suggested continued pandemic-fueled demand for digital goods and services, which had buoyed many tech companies. Near-zero interest rates meant startups still had easy access to the funding that had fueled their high valuations and risky ventures.

    But the year is ending on a much different note. A perfect storm of factors has forced a dizzying reality check for the once high-flying tech sector, making it one of the biggest losers of 2022.

    Over the course of the year, pandemic-era demand for many tech tools shifted; inflation soared; interest rates rose and fears of a looming recession weighed on consumer and advertiser spending, the latter of which makes up the core business of many household names in tech.

    The result was a bloodbath unlike anything the tech industry has seen in the past decade. Tech stocks plunged, amid a broader market downturn. Tens of thousands of rank-and-file tech workers lost their livelihoods amid mass layoffs, both at tech giants like Amazon and Facebook-parent Meta as well as at smaller tech companies like Lyft, Peloton and Stripe. The crypto world all but imploded. And an entire industry known for burning cash on ambitious moonshots instead started shutting down projects and announcing cost-cutting efforts.

    Even the title of world’s richest man, which previously belonged to serial tech founder Elon Musk, ended up passing to Bernard Arnault, the chairman of French luxury goods giant LVMH, after Musk’s chaotic purchase of Twitter appeared to sour investors on his car company, Tesla.

    The sharp shift in sentiment not only removed the air of invincibility for the industry; it also exposed some of its underlying myths. For years, Silicon Valley has held up its founders as visionaries who can see far into the future. But suddenly, many of its most prominent founders had to admit a harsh truth: they couldn’t even predict two years ahead.

    As Facebook founder Mark Zuckerberg put it in a memo to staff last month announcing the company would cut 11,000 employees: “Unfortunately, this did not play out the way I expected.”

    He was far from the only one in the industry caught off guard.

    When the pandemic upended the broader economy in early 2020, tech firms only seemed to grow bigger and more powerful as people were forced to live out their lives online. Facebook (now Meta) could afford to nearly double its headcount and make multi-billion-dollar bets on a future version of the internet dubbed the metaverse. Amazon similarly went on a hiring spree and doubled its fulfillment center footprint to meet the surge in online shopping demand.

    “At the start of Covid, the world rapidly moved online and the surge of e-commerce led to outsized revenue growth,” Zuckerberg wrote in his memo to staff last month. “Many people predicted this would be a permanent acceleration that would continue even after the pandemic ended. I did too, so I made the decision to significantly increase our investments.”

    Then the market shifted.

    “People are terrible at predicting the future, and we always think that what’s happening now is going to happen forever,” Angela Lee, a professor at Columbia Business School who teaches venture capital, leadership, and strategy courses, told CNN. “But the reality is that the pandemic was a black swan event, and none of us knew what would happen going forward.”

    One by one, the visionaries of Silicon Valley issued mea culpas. The founders of Stripe, Twitter and Facebook each took turns admitting they either grew their companies too quickly or were overly optimistic about pandemic-fueled growth in their sector.

    “We were much too optimistic about the internet economy’s near-term growth in 2022 and 2023 and underestimated both the likelihood and impact of a broader slowdown,” Patrick Collison, CEO of Stripe, wrote in a note to employees last month announcing 14% of the staff would be cut.

    It wasn’t only consumers shifting their lives back offline that hurt the industry. The tech sector was particularly pummeled by the impact of rising interest rates this year. Silicon Valley as a whole is arguably more sensitive to interest rate hikes than other industries, as many tech companies rely on easy access to funding to pursue their ambitious projects, typically before even turning a profit.

    In a move to tame inflation, the Fed approved seven straight rate hikes in 2022. The tech-heavy Nasdaq index shed more than 30% from the beginning of the year through Dec. 21. By comparison, the Nasdaq soared more than 40% in 2020 and a further 20% in 2021. And the S&P 500’s Information Technology sector fell more than 28% this year through Dec. 21, a considerably steeper decline than the broader S&P 500’s 19% drop over the same period.

    Apple’s market cap now hovers just above $2 trillion. Amazon’s stock has shed some 50% year to date. And shares of Meta have been hit even harder, losing nearly two-thirds of their value in 2022. A trillion-dollar company as recently as last year, Meta has since seen its market value drop below that of companies like Home Depot.

    The shift in sentiment for tech has also hit the next generation of companies that aspire to be household names.

    Global venture funding hit a nine-quarter low of $74.5 billion in the third quarter of 2022, according to data from analytics firm CB Insights. That marked the largest quarterly percentage drop in a decade (34%) and a 58% decline from the investment peak reached in the fourth quarter of 2021.

    In another sign of how this played out in the startup world: more than two new unicorns (startups valued at $1 billion or more) were born on average per business day in 2021, according to separate data from CB Insights. That rate dropped to less than one new unicorn every other business day in the third quarter of 2022, per CB Insights’ most recent analysis, the lowest since the first quarter of 2020.

    Lee, who is also the founder of investing network 37 Angels, said when she met with tech founders this year, “I have said these words, which is, ‘I might have done this deal last year, but I am not going to do it now.’ And I’ve heard a lot of other people say that as well.”

    While the belt tightening might be painful for tech founders, Lee says she views it as a good thing for the tech industry overall. Many industry insiders have long said these sorts of corrections can help weed out some of the excess in the market and ensure more financially viable companies are the ones that survive.

    “Right now, there are like a lot of headlines that are just like, ‘The sky is falling, the end is near,’ and the way that I describe it is more of like a return to normalcy,” said Lee, noting that most charts tracking VC spending (from the number of mega-rounds to the number of IPOs) had a huge hump in 2020 and 2021 when interest rates were low, and now these charts are starting to look like how they did in 2019.

    “I would just call it like a ‘return to sanity,’ versus like, ‘the sky is falling,’” Lee said. “I do not think venture is cratering, or the tech industry is cratering as an industry.”

    But for now, at least, there appears to be no end in sight to the pain for Silicon Valley and those who work in it.

    In his own memo acknowledging job cuts at Amazon, CEO Andy Jassy said the layoffs at Amazon, reported to total some 10,000 roles, would continue into 2023. At a conference last month, he called the earlier hiring spree a “lesson” for everybody.

    [ad_2]

    Source link

  • Google to pay Indiana $20 million to resolve privacy suit

    Google to pay Indiana $20 million to resolve privacy suit

    [ad_1]

    INDIANAPOLIS — Google will pay Indiana $20 million to resolve the state’s lawsuit against the technology giant over allegedly deceptive location tracking practices, state Attorney General Todd Rokita announced.

    Rokita filed a separate lawsuit against Google when negotiations between the company and a coalition of state attorneys general stalled, he said. Those states agreed to a $391.5 million settlement with the company in November.

    As a result of the separate lawsuit, Indiana received about twice as much money as it would have under the deal with the 40 states in the coalition, Rokita said in his announcement Thursday.

    “This settlement is another manifestation of our steadfast commitment to protect Hoosiers from Big Tech’s intrusive schemes,” Rokita said.

    States began investigating after a 2018 Associated Press story that found that Google continued to track people’s location data even after they opted out of such tracking by disabling a feature the company called “location history.”

    Google did not admit to any wrongdoing as part of the deal with Indiana. An email seeking comment was sent Friday to Google’s press office.

    Indiana’s lawsuit alleged Google uses location data to build detailed user profiles and target ads. It alleged that the company has deceived and misled users about its practices since at least 2014.

    Rokita said he sued Google because even a limited amount of location data can expose a person’s identity and routines. Such data can be used to infer personal details such as political or religious affiliation, income, health status or participation in support groups — as well as major life events such as marriage and the birth of children, he said.

    [ad_2]

    Source link

  • There’s now an open source alternative to ChatGPT, but good luck running it

    There’s now an open source alternative to ChatGPT, but good luck running it

    [ad_1]

    The first open source equivalent of OpenAI’s ChatGPT has arrived, but good luck running it on your laptop — or at all.

    This week, Philip Wang, the developer responsible for reverse-engineering closed-source AI systems including Meta’s Make-A-Video, released PaLM + RLHF, a text-generating model that behaves similarly to ChatGPT. The system combines PaLM, a large language model from Google, and a technique called Reinforcement Learning from Human Feedback — RLHF, for short — to create a system that can accomplish pretty much any task that ChatGPT can, including drafting emails and suggesting computer code.

    But PaLM + RLHF isn’t pre-trained. That is to say, the system hasn’t been trained on the example data from the web necessary for it to actually work. Downloading PaLM + RLHF won’t magically install a ChatGPT-like experience — that would require compiling gigabytes of text from which the model can learn and finding hardware beefy enough to handle the training workload.

    Like ChatGPT, PaLM + RLHF is essentially a statistical tool to predict words. When fed an enormous number of examples from training data — e.g., posts from Reddit, news articles and e-books — PaLM + RLHF learns how likely words are to occur based on patterns like the semantic context of surrounding text.
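
    The idea of learning "how likely words are to occur" can be illustrated with a deliberately tiny bigram model (a toy stand-in for what large language models do over far longer contexts):

    ```python
    from collections import Counter, defaultdict

    # Toy bigram model: count which word follows which in a training corpus,
    # then predict the most likely next word. Real large language models use
    # far richer context, but the core idea is the same: probabilities of
    # words, learned from example text.

    def train_bigrams(corpus):
        counts = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts, word):
        # Return the most frequent follower of `word`, or None if unseen.
        if word not in counts:
            return None
        return counts[word].most_common(1)[0][0]

    model = train_bigrams("the cat sat on the mat and the cat slept")
    ```

    Here `predict_next(model, "the")` returns `"cat"`, because "cat" follows "the" twice in the corpus while "mat" follows it only once.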

    ChatGPT and PaLM + RLHF share a special sauce in Reinforcement Learning from Human Feedback, a technique that aims to better align language models with what users wish them to accomplish. RLHF involves training a language model — in PaLM + RLHF’s case, PaLM — and fine-tuning it on a dataset of prompts (e.g., “Explain machine learning to a six-year-old”) paired with what human volunteers expect the model to say (e.g., “Machine learning is a form of AI…”). Those prompts are then fed to the fine-tuned model, which generates several responses, and the volunteers rank the responses from best to worst. Finally, the rankings are used to train a “reward model” that takes the original model’s responses and sorts them in order of preference, filtering for the top answers to a given prompt.
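
    The ranking step can be made concrete. A common objective for training such a reward model (this is a generic illustration of the technique, not code from PaLM + RLHF) is a pairwise loss that pushes the reward of the human-preferred response above that of the rejected one:

    ```python
    import math

    # Pairwise ranking loss often used for RLHF reward models: for a pair of
    # responses where rankers preferred A over B, -log(sigmoid(r_A - r_B)) is
    # small when the model already scores A higher and large when it scores B
    # higher, nudging the learned rewards to match the human rankings.

    def pairwise_loss(reward_preferred, reward_rejected):
        margin = reward_preferred - reward_rejected
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    # Correctly ordered pair: small loss.
    low = pairwise_loss(2.0, -1.0)
    # Inverted pair: large loss, a strong signal to reorder.
    high = pairwise_loss(-1.0, 2.0)
    ```

    In practice the scores come from a language model with a scalar output head, and a full ranking of several responses is decomposed into all of its pairwise comparisons.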

    It’s an expensive process, collecting the training data. And training itself isn’t cheap. PaLM is 540 billion parameters in size, “parameters” referring to the parts of the language model learned from the training data. A 2020 study pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. And to train the open source model Bloom, which has 176 billion parameters, it took three months using 384 Nvidia A100 GPUs; a single A100 costs thousands of dollars.

    Running a trained model of PaLM + RLHF’s size isn’t trivial, either. Bloom requires a dedicated PC with around eight A100 GPUs. Cloud alternatives are pricey, with back-of-the-envelope math finding the cost of running OpenAI’s text-generating GPT-3 — which has around 175 billion parameters — on a single Amazon Web Services instance to be around $87,000 per year.
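
    For a sense of scale, the figures above imply the following back-of-the-envelope numbers. The inputs are the article's own; everything derived from them is an estimate.

    ```python
    # Bloom: 384 Nvidia A100 GPUs running for roughly three months.
    bloom_gpu_hours = 384 * 90 * 24       # about 829,440 A100-hours of compute

    # Hosting a GPT-3-sized model: ~$87,000 per year on one AWS instance
    # works out to roughly $10 for every hour the instance is up.
    hourly_hosting = 87_000 / (365 * 24)  # about $9.93 per hour
    ```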

    Sebastian Raschka, an AI researcher, points out in a LinkedIn post about PaLM + RLHF that scaling up the necessary dev workflows could prove to be a challenge as well. “Even if someone provides you with 500 GPUs to train this model, you still need to have to deal with infrastructure and have a software framework that can handle that,” he said. “It’s obviously possible, but it’s a big effort at the moment (of course, we are developing frameworks to make that simpler, but it’s still not trivial, yet).”

    That’s all to say that PaLM + RLHF isn’t going to replace ChatGPT today — unless a well-funded venture (or person) goes to the trouble of training and making it available publicly.

    In better news, several other efforts to replicate ChatGPT are progressing at a fast clip, including one led by a research group called CarperAI. In partnership with the open AI research organization EleutherAI and startups Scale AI and Hugging Face, CarperAI plans to release the first ready-to-run, ChatGPT-like AI model trained with human feedback.

    LAION, the nonprofit that supplied the initial dataset used to train Stable Diffusion, is also spearheading a project to replicate ChatGPT using the newest machine learning techniques. Ambitiously, LAION aims to build an “assistant of the future” — one that not only writes emails and cover letters but “does meaningful work, uses APIs, dynamically researches information and much more.” It’s in the early stages. But a GitHub page with resources for the project went live a few weeks ago.

    [ad_2]

    Kyle Wiggers

    Source link

  • Librarians Are Meeting Younger Readers Where They Are: TikTok

    Librarians Are Meeting Younger Readers Where They Are: TikTok

    [ad_1]

    “It is our job to select, acquire, describe, make accessible and circulate preserved knowledge,” Drabinski added. “That’s the whole project. So as technology changes the ways things are circulated, we change with it.”

    Librarians can also use TikTok to spread trustworthy information on a platform rife with manipulated content. “It is a space that requires critical information literacy,” said Jessie Loyer, an academic librarian in Calgary, Alberta, who posts about topics including digital sovereignty and repatriation on TikTok under the handle @IndigenousLibrarian.

    “Librarians have always been involved in helping people figure out what is real, what is relevant,” Loyer added. So TikTok, she said, is “a necessary space to be in, and a useful tool.”

    Not everyone is on board with the idea of librarians posting on TikTok. Some library directors and boards find some TikTok accounts unprofessional, Vickers said. And some librarians are ambivalent about encouraging young people to use the platform. Elizabeth Miller, 22, a youth services librarian at the Rehoboth Beach Public Library in Rehoboth Beach, Del., said that while TikTok has potential for helping people make friends and explore hobbies, the app isn’t always a healthy environment for adolescents.

    But others, including librarians at Kankakee Public Library, find that TikTok lets them engage with the community in person, too. The library often collaborates with local figures, including the mayor. “He’s always excited to do it,” said Greer, who helps make the videos. The library has plans to make TikToks with cheerleaders and the drama club at the local high school next year.

    “We may not make them readers this year or next year,” said her colleague Mary Bass, 30, the youth services assistant supervisor and lead at the Kankakee library. “But they’ll know that we’re here as they grow up.”

    [ad_2]

    Lora Kelley

    Source link

  • What to look for in a term sheet as a first-time founder

    What to look for in a term sheet as a first-time founder

    [ad_1]

    Securing funding is a stressful endeavor, but it doesn’t have to be. We recently sat down with three VCs to figure out the best way to go about spinning up an investing network from scratch and negotiating the first term sheet.

    Earlier this week, we featured the first part of that conversation with James Norman of Black Operator Ventures, Mandela Schumacher-Hodge Dixon of AllRaise, and Kevin Liu of both Techstars and Uncharted Ventures.

    In part two, the investors cover more specifics about what to ask for in a term sheet and red flags you should look out for.

    (Editor’s note: This interview has been edited lightly for length and clarity.)


    Why should you know what’s going to be in a term sheet before you see it?

    Mandela Schumacher-Hodge Dixon: Do not wait until you get a term sheet to start going back and forth. The term sheet should be a reflection of what was already verbally agreed upon, including the valuation. Don’t wait until you get that legal agreement in your inbox to begin pushing back, because it’s really annoying, and it starts to affect how they feel about you.

    I’ve even seen investors pull the term sheet. No one is bulletproof, but you really want to be as bulletproof as possible in every stage of this. That requires preparation and clear communication.

    James Norman: As you plan out your whole fundraising process, lean into it and start to see what the market is thinking, you want to have a bottom line in terms of what you’re willing to accept. At some point, you may need to capitulate, but be convinced about [that bottom line] and have a reasoning for it.

    VCs are trying to invest in leaders, so they know there’s going to be a power dynamic here. How you manage that and move things forward [impacts] how they think you’re going to do other things like hire employees and land customers.

    Which mechanism is best to use at the outset?

    Norman: Once you get the term sheet, the game has really begun.

    Regarding terms, you want to make sure that you’re getting an agreement that is at parity with the level you’re at with your company. You don’t want to end up with an angel investor trying to give you some Series A Preferred docs or anything of that nature.

    If you have a pre-seed or seed-stage startup, 99% of the time you should be using a SAFE (a Simple Agreement for Future Equity, which Y Combinator devised in 2013). It’s got all the standard language that you need; no one can argue with it. [If they do], be like, “Go talk to Y Combinator about that.”

    [ad_2]

    Connie Loizos

    Source link

  • Apple Messages app: 5 features to remember

    Apple Messages app: 5 features to remember

    [ad_1]

    Apple’s Messages app now lets users do a lot more than just text and share media. 

    With the latest iOS updates, even more functions are available, expanding customers’ capabilities. 

    When you next use your iPhone, iPad or Mac, here are five functions to remember. 

    1. You can add your Memoji

    Tap the Memoji button and swipe right to add a new one. Apple lets users customize features, including skin tone, hair, eyes and more. 

    The Memoji automatically becomes sticker packs that live in the keyboard and can be used in Messages, Mail and some third-party apps.

    To use an animated Memoji, tap the Memoji button, pick your Memoji, hit the red button and record for up to 30 seconds. 

    To reuse the same recording with a different Memoji, select another Memoji you created.

    Craig Federighi, senior vice president of software engineering at Apple Inc., speaks during the Apple Worldwide Developers Conference in San Jose, California, June 4, 2018. 
    (David Paul Morris/Bloomberg via Getty Images)

    2. You can edit a sent message

    With iOS 16 or later, iPadOS 16 or later, or macOS Ventura, Apple users can edit or unsend messages.

    Should the recipient’s device run an earlier version of iOS, they receive follow-up messages prefaced with “Edited to” and the new message in quotation marks. 

    SMS messages cannot be unsent or edited.

    A sent message can be unsent for up to two minutes after it is sent. 

    Touch and hold the message bubble and then tap Undo Send.

    A note confirming that you unsent the message appears in both conversation transcripts: the sender’s and the recipient’s.

    Apple’s Messages icon displayed on a phone screen is seen in this illustration photo taken in Krakow, Poland on August 26, 2021. 
    (Photo Illustration by Jakub Porzycki/NurPhoto via Getty Images)

    When unsending a message, the user is notified that the recipient may still see the original message in the message transcript.

    A sent message can be edited up to five times within 15 minutes of sending it. 

    Just touch and hold the message bubble and tap Edit. 

    The message is marked as Edited in the conversation transcript.

    3. You can mention someone

    With iOS 14 and iPadOS 14 and later, you can reply directly to a specific message and use mentions to call attention to certain messages and people.

    The Apple logo is seen above the entrance of the Apple Store in Tokyo Oct. 20, 2022.
    (Photo by Stanislav Kogiku/SOPA Images/LightRocket via Getty Images)

    Open a conversation, type a contact’s name, and then tap the name when it appears. 

    Alternatively, type @ followed by their name. 

    “Depending on your contact’s settings, a mention can notify them even if they’ve muted the conversation. To change this notification setting, go to Settings > Messages, then turn Notify Me on or off,” Apple advises.

    4. You can get back messages that were deleted

    With iOS 16, iPadOS 16.1, or later, you can recover individual messages or full conversations that were deleted.

    Tap Edit, then Show Recently Deleted, choose the conversations with the messages you want to restore and then hit Recover. 

    Tap Recover Message or Recover [Number] Messages. 

    5. You can filter out messages from unknown senders

    Go to Settings, Messages, scroll down to Message Filtering and then turn on Filter Unknown Senders.

    When the setting is turned on, messages from people who aren’t in your contacts appear only when you go to Filters and then Unknown Senders.

    [ad_2]

    Source link

  • Meta acquires Luxexcel, a smart eyewear company

    Meta acquires Luxexcel, a smart eyewear company

    [ad_1]

    As Meta faces antitrust scrutiny over its acquisition of VR fitness developer Within, the tech giant is making another acquisition. Meta confirmed to TechCrunch that it is purchasing Luxexcel, a smart eyewear company headquartered in the Netherlands. The terms of the deal, which was first reported in the Belgian paper De Tijd, have not been disclosed.

    Founded in 2009, Luxexcel uses 3D printing to make prescription lenses for glasses. More recently, the company has focused its efforts on smart lenses, which can be printed with integrated technology like LCD displays and holographic film.

    “We’re excited that the Luxexcel team has joined Meta, deepening the existing partnership between the two companies,” a Meta spokesperson told TechCrunch. It’s rumored that Meta and Luxexcel had already worked together on Project Aria, the company’s augmented reality (AR) research initiative.

    In September 2021, Meta unveiled the Ray-Ban Stories, a pair of smart glasses that can take photos and videos, or make hands-free, voice-controlled calls using Meta platforms like WhatsApp and Facebook. By absorbing Luxexcel, Meta will likely leverage the company’s technology to produce prescription AR glasses, a product that has long been anticipated to come out of Meta’s billions of dollars of investment into its Reality Labs. However, a report this summer stated that Meta was scaling back its plans for consumer-grade AR glasses, which were initially slated for 2024. Meta did not comment on these rumors at the time.

    When building its AR and VR products, Meta’s corporate strategy has been to acquire smaller companies that are building top technology in the field. Even Meta’s flagship headset, the Quest, comes from its acquisition of Oculus in 2014. Given the FTC’s attempts to block Meta’s purchase of Within, it’s possible that the purchase of Luxexcel could spark the same scrutiny.

    Amanda Silberling

  • Despite myriad flaws, US remains top spot for Black startup founders seeking VC dollars

    Despite, well, everything, the U.S. is still the best place in the world for Black startup founders to raise money. The check sizes are bigger, the market more mature, the ambition oversized. There are more funds, more options, more opportunities, more, more, more.

    It’s quite easy to harp on the dismal funding and often discriminatory treatment that Black founders receive in the U.S. Through the haze, though, the reality is that the heart of the American Dream is still beating.

    For example, Lotanna Ezeike, a serial founder, said he’s looking to fundraise for his new startup in the U.S., despite raising more than $1 million for his U.K.-based fintech, XPO.

    “Across the pond in the U.K., thinking tends to be very limited, especially around the seed stage,” he said, adding that a seed in the U.K. is a pre-seed or family round in the U.S.

    “I think this is because of how small the U.K. is compared to other regions, so the mind can only dream so big. It’s a spiral really — less wealth, less capital, fewer ideas that become unicorns.”

    Cephas Ndubueze, who is from Germany, echoed the sentiment. He said he still looks to the U.S. for venture funds for his startup because there are more success stories of Black founders there than in Europe, giving him a better chance of finding his own path than he would have in Germany.

    “I can definitely say the U.S. is a better environment for Black founders,” he told TechCrunch. “Why? More diverse investors in the U.S. More investors are investing in nontraditional businesses. More institutional investors are providing ticket sizes from $100,000 to $500,000 in the idea stage, more opportunities to build a founder network, and more investors that have already invested in Black founders in the past.”

    While the reception of Black founders may appear warmer in the U.S., the funding numbers tell a familiar story. (France and Germany do not track race data, though founders and venture capitalists interviewed by TechCrunch offered anecdotal evidence of persistent racism in both markets.) Ironically, that gap still drives founders to the U.S. for networking opportunities.

    Dominic-Madori Davis
