ReportWire

Tag: TECH

  • Streamlining Your Tech Stack to Unleash Maximum Efficiency | Entrepreneur


    Opinions expressed by Entrepreneur contributors are their own.

    Amid escalating economic uncertainty due to inflation and a potential downturn, companies are increasingly gravitating toward cost-cutting measures and investment reduction. However, a more sustainable solution could lie in streamlining processes and bolstering operational efficiencies.

    Most businesses look to reduce spending, with the software and digital technology sectors heavily targeted. But this isn’t the only option. Rather than reducing your investments in this area, you could look more closely at your current activities in the digital arena, to first identify and then address inefficiencies. With this approach, you could reduce costs without actually hampering your capacity to serve your customers’ needs or scale up operations.

    Related: This Tech Leader Breaks Down How You Can Avoid Business Disaster With This Often-Overlooked Tactic

    Completing your tech stack audit

    Your tech stack audit should classify each technology according to use:

    • Marketing
    • Sales
    • Customer success
    • Ops and analytics

    This type of breakdown will facilitate a more organized approach to testing and evaluation. The audit helps determine each tool's overall return on investment and how it contributes to the company's operations, objectives and seamless internal and customer communication.


    Your tech stack includes the software, web applications and tools needed to construct functional websites — for your customers and you.

    To deliver great results, your digital interfaces and applications should be fast, well-organized and straightforward. One or two poorly chosen technologies can hinder performance by introducing inefficiencies that can cost both time and money, and by making it more difficult to complete sales, or collect and process essential data.

    Greater diversification in the tasks users perform has caused today's tech stacks to become increasingly byzantine in their structures, increasing both the variety of digital devices needed to access online portals and the volume of data that sites must now process. Businesses with an online presence are expected to do more in the digital environment than ever before.

    This increases the likelihood of inefficiencies developing due to incompatibilities with software that doesn’t integrate smoothly with other technologies in your stack. For optimum efficiency and integration, your first step should be to evaluate each technology.

    Classifying each technology by function in this way also lets your evaluators work through one category at a time during testing and evaluation.

    Finding the inefficiencies that sabotage your business

    There may be multiple inefficiencies hidden deep within your tech stack. In your relentless efforts to uncover obstacles that might negatively impact profitability, here are some critical aspects of your business operation you should be evaluating.

    Related: Your Tech Employees Are Your Most Potent Reputational Tool as Your Firm Recruits

    Data collection

    You can choose a set of tools that functions in a perfectly coordinated manner, but it doesn’t benefit you if the right data are not being collected, analyzed and communicated.

    The data you collect and analyze should shed light on the most important metrics, such as sales velocity and conversion rates. If you aren’t getting actionable data that helps you streamline your operations or strengthen your relationships with your customers, you may need to eliminate some tools from your stack or replace them with newer technologies.
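    As a concrete illustration of one such metric, sales velocity is commonly derived from four pipeline numbers. The function and figures below are hypothetical, a minimal sketch rather than any formula prescribed in this article:

```python
def sales_velocity(opportunities, win_rate, avg_deal_size, cycle_days):
    """Estimate revenue generated per day by the sales pipeline.

    opportunities: number of open qualified opportunities
    win_rate:      fraction of opportunities that close (0..1)
    avg_deal_size: average revenue per closed deal
    cycle_days:    average length of the sales cycle in days
    """
    return opportunities * win_rate * avg_deal_size / cycle_days

# e.g. 50 open deals, 25% win rate, $8,000 average deal, 60-day cycle
velocity = sales_velocity(50, 0.25, 8_000, 60)
print(f"${velocity:,.2f} of new revenue per day")  # prints "$1,666.67 of new revenue per day"
```

    Tracking a number like this over time makes it easier to see whether a change to the stack actually moved the needle.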

    Business processes

    If you haven’t already done so, you should add software to your tech stack that can automatically calculate prices for the customized products and services you provide. Trying to handle CPQ (configure-price-quote) responsibilities the old-fashioned way, via human calculation, will take too much time and increase the chances of miscalculation.
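    To make the CPQ idea concrete, here is a toy sketch of the kind of calculation such software automates. The catalog, option prices and discount tier are invented for illustration only:

```python
# Toy configure-price-quote (CPQ) calculator. The base price, add-on
# options and volume-discount tier below are hypothetical examples.
BASE_PRICE = 1_000.00
OPTION_PRICES = {
    "extra_storage": 250.00,
    "priority_support": 400.00,
    "training": 150.00,
}

def quote(options, quantity):
    """Price one configured unit, then apply a simple volume discount."""
    unit = BASE_PRICE + sum(OPTION_PRICES[o] for o in options)
    subtotal = unit * quantity
    discount = 0.10 if quantity >= 10 else 0.0  # 10% off at 10+ units
    return round(subtotal * (1 - discount), 2)

print(quote(["extra_storage", "training"], 12))  # prints 15120.0
```

    Even a rule set this small is error-prone when worked out by hand for every deal, which is the inefficiency CPQ software removes.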

    Customer retention services

    You need to differentiate your strategic customers (your most loyal and active supporters) from those who are less committed. While the customer service you provide should be diversified and personalized to meet the needs of both constituencies, you shouldn’t waste time and money on services aimed at low-value customers — a negative return on investment is highly likely.

    Marketing

    You need to analyze your current marketing efforts carefully and with a skeptical, even critical eye. You need to know if you’re getting an acceptable return on investment, which means reaching your targeted audience with a message that resonates. If your ROI is lacking, your present marketing strategies should be sent packing.

    CRM capabilities

    Your customer relationship management (CRM) platform is the hub of your operations, and as such, you can’t afford to choose or keep a product that doesn’t fully support the implementation of your business plan.

    You shouldn’t assume that popular CRM platforms like HubSpot, Salesforce or Zoho are automatically right for everyone. One of them may well be right for you, but you should carefully evaluate the strengths and weaknesses of your current CRM choice and be prepared to switch to an option whose capabilities align more closely with your unique and vital needs.

    Scalability

    Your technology ecosystem should always be scalable, no matter how rapidly your growth occurs. Your software should be able to accommodate increases in customer engagement and sales volume without creating logjams or bottlenecks that require technological additions, subtractions or substitutions.

    Related: Want Tech Workers to Stick Around Longer? Think 30 Years — Not 3.

    Greater efficiency will give you the edge

    The kind of exhaustive tech evaluation proposed here is not for the faint of heart. It’s a meticulous and comprehensive process that will take time, effort and extraordinary attention to detail, whether you hand the responsibility over to an in-house vetting team or contract outside experts to perform the job for you.

    What you’re really doing when you undertake such a process is signaling a revolution in your way of thinking. A comprehensive review of your technology choices can help you refine, retool or redevelop your go-to-market (GTM) strategies without cutting your investment in your business, which you now realize is the best way to survive difficult times. The changes you make afterward will give you a leg up on the competition, providing you with a real opportunity to scale up while others are only thinking about scaling back.


    Catherine Mandungu


  • How to Ensure Tech Doesn’t Overshadow Your Brand’s Human Touch | Entrepreneur


    Tech is powering business, but it isn’t enough on its own to generate success. Creating a personal touch still remains one of the most potent and effective ways for a brand to stand out from the crowd.

    The problem is that companies have a tendency to drift from the “human touch” over time. Without a deliberate effort to stay relatable, they quickly sink into a state of cold-hearted activity and bottom-line calculations.

    With recent AI and automation technology exploding in popularity, the question becomes: How can you fit all of these new tech tools into your business plans without losing your human touch?

    Let’s dive into a few of the best ways that I’ve found companies can get the most out of bleeding-edge tech without losing sight of the human experience along the way.

    Related: The Human Touch: What It Takes To Maintain Meaningful Client Relationships In A World Driven By Artificial Intelligence

    1. Embrace a human-first approach

    In SEO (search engine optimization), marketers use keywords, linking and back-end work on a website to optimize online content so that it ranks well in search engines and drives organic traffic to the company’s site.

    While this is all technically focused, it’s critical that SEO experts remember to put the reader first as they craft their content. If they make ranking high in search engines the top priority, it can lead to confusing text that doesn’t meet a reader’s needs. To put it another way, SEO experts must prioritize the readers (i.e., users/customers) first and the search engines that point those readers toward their content second.

    The same principle applies to any application of technology in business. You should never prioritize tech tools as an end unto themselves. Instead, they should have a clear benefit that helps you serve your target audience better.

    In marketing, this is referred to as human-to-human marketing. In customer service, a consumer-centric approach is essential. And when I say essential, I’m not exaggerating.

    During the pandemic, when companies were using tech tools hand over fist to maintain their connection with customers, CGS polled thousands of consumers. The goal was to see how they were faring in online customer service interactions.

    The results were telling. More than a third of respondents (37.8% in the U.S. and 39.1% in the U.K.) didn’t just say that having a human element in the interaction was important. They said that an opportunity to connect with a human agent was one of their top three requirements for leaving the interaction happy.

    If you want to embrace tech without losing the human touch, start by prioritizing the customer over the tech in every situation.

    Related: In An Era Of Artificial Intelligence, There’s Always Room For Human Intelligence

    2. Don’t let tech hide your humanity

    Technology can have an endless number of applications for a brand. You can use it to speed up invoicing, track customer profiles and forecast sales cycles, and the list goes on.

    One thing that tech should never be, though, is a cop-out. You should never use tech to avoid an issue, like dealing with an unhappy customer.

    In fact, in the CGS survey listed above, nearly half of those asked wanted brands to be more transparent about how to get help from a human. They didn’t want to have to put in extra work to find a way around an automated customer service system.

    When technology is implemented purely to save a buck or make an internal problem go away at the expense of the customer, it quickly becomes a misuse of the technology’s value. Remember, tech should always enhance the customer experience. That influence can be direct or indirect, but it should always be a factor.

    It’s one thing to use technology to make things easier or reduce your overhead — if doing that hides your brand’s humanity, though, you should look for a better option.

    3. Use tech to make human-centered activities easier

    One of the simplest ways to lean into tech effectively is to use it to make “human touch” business activities more optimized (and, by extension, easier to invest in and sustain).

    For instance, a branded podcast is a great way to showcase a brand’s humanity. It requires real-life recordings from the experts and individuals behind your products and services.

    That said, a podcast is a lot of work. That can make it hard for companies to pull the trigger on a recurring show. This is a perfect opportunity for tech to help — and in more ways than one.

    One example is the numerous AI and automation tools available to streamline the podcast production process. Simon Hodgkins points out that AI is already using NLP (natural language processing) to automate transcription services.

    The CMO adds that AI can also help with post-production. It can remove background noise and fix irregularities in sound levels. AI can even generate ancillary items, such as show notes and social posts.

    You can go even further by having an amplified marketing tool develop a longer blog article based on an episode that dives deeper into a topic. You will still want a human editor to give your content a once-over, but the overall process is faster, more affordable and expands your reach.

    Related: The Rise (and Rise) of Branded Podcasts

    Technology and our humanity don’t have to be mutually exclusive aspects of business. With a little forethought, it’s easy to get the two to overlap.

    Embrace a human-first mindset, and evaluate tech to ensure that it is helping rather than hiding your brand’s human touch. If you can maintain that mindset, you can find countless ways to use tech to give you a competitive advantage in your industry.


    Lindsay Tjepkema


  • Google Salary Data Leak Shows Employee Compensation in 2022 | Entrepreneur


    Tech jobs have long been in the top ranks among the highest-paying industries, but some companies really shell out the dough for their engineers.

    In 2022, the median total compensation for Google employees was $279,802, according to leaked internal data from the company reviewed by Business Insider. Among the highest-paying positions at Google, software engineers led the pack with a maximum base salary of $718,000 last year.

    The data comes from an internal spreadsheet shared among Google employees, comprising information from over 12,000 U.S. workers for 2022 and covering positions like software engineers, business analysts and salespeople. While software engineers had the highest base salary, maximum equity and bonuses, all of the top 10 highest-paying positions in engineering, business and sales had maximum base salaries well into six figures.

    According to the report, Google employees’ earnings go beyond base salaries and include options and bonuses. The maximum equity a software engineer could obtain was $1.5 million in 2022.

    Related: These Are the Highest Paid CEOs — And 9 Make More Than $100 Million a Year, According to a New Report

    As far as where Google’s 2022 salary stacks up against other tech giants, the median base pay trails behind Meta ($296,320) but is well above Salesforce ($199,130) and Adobe ($170,679), according to data collected by MyLogIQ and analyzed by The Wall Street Journal.

    Here’s a look at the top 10 highest base salaries at Google across all industries at the company, per Insider’s report. The data is limited to U.S. full-time employees and does not include salaries from Alphabet’s Other Bets ventures, such as Waymo and Verily. Also, not all employees disclosed their equity and bonus data.

    Top 10 Highest Base Salaries at Google in 2022:

    1. Software engineer: $718,000

    2. Engineering manager (software engineering): $400,000

    3. Enterprise direct sales: $377,000

    4. Legal corporate counsel: $320,000

    5. Sales strategy: $320,000

    6. UX design: $315,000

    7. Government affairs & public policy: $312,000

    8. Research scientist: $309,000

    9. Cloud sales: $302,000

    10. Program manager: $300,000

    You can see the full list here.

    Related: Google and Meta Execs Rake in Big Bonuses Despite Industry-Wide Layoffs


    Madeline Garfinkle


  • Chip wars: How ‘chiplets’ are emerging as a core part of China’s tech strategy


    July 13 (Reuters) – The sale of struggling Silicon Valley startup zGlue’s patents in 2021 was unremarkable except for one detail: The technology it owned, designed to cut the time and cost for making chips, showed up 13 months later in the patent portfolio of Chipuller, a startup in China’s southern tech hub Shenzhen.

    Chipuller purchased what is referred to as chiplet technology, a cost-efficient way to package groups of small semiconductors so they form one powerful brain capable of powering everything from data centers to gadgets at home.

    The previously unreported technology transfer coincides with a push for chiplet technology in China that started about two years ago, according to a Reuters analysis of hundreds of patents in the U.S. and China and dozens of Chinese government procurement documents, research papers and grants, local and central government policy documents and interviews with Chinese chip executives.

    Industry experts say chiplet technology has become even more important to China since the U.S. barred it from accessing advanced machines and materials needed to make today’s most cutting edge chips, and now largely underpins the country’s plans for self-reliance in semiconductor manufacturing.

    “U.S.-China competition is on the same starting line,” Chipuller chairman Yang Meng said about chiplet technology in an interview with Reuters. “In other (chip technologies) there is a sizeable gap between China and the United States, Japan, South Korea, Taiwan.”

    Barely mentioned before 2021, chiplets have been highlighted more frequently by Chinese authorities in recent years, according to a Reuters review. At least 20 policy documents from local to central governments referred to the technology as part of a broader strategy to increase China’s capabilities in “key and cutting-edge technologies”.

    “Chiplets have a very special meaning for China given the restrictions on wafer fabrication equipment,” said Charles Shi, a chip analyst for brokerage Needham. “They can still develop 3D stacking or other chiplet technology to work around those restrictions. That’s the grand strategy, and I think it might even work.”

    Beijing is rapidly exploiting chiplet technology in applications ranging from artificial intelligence to self-driving cars, with entities from tech giant Huawei Technologies to military institutions exploring its use.

    More major investments in the area are on the way, according to a review of corporate announcements.

    CHINA’S CHIPLET ADVANTAGE

    Chiplets, or small chips, can be the size of a grain of sand or bigger than a thumbnail and are brought together in a process called advanced packaging.

    It is a technology the global chip industry has increasingly embraced in recent years as chip manufacturing costs soar in the race to make transistors so small they are now measured in the number of atoms.

    Bonding chiplets tightly together can help make more powerful systems without shrinking the transistor size as the multiple chips can work like one brain.

    Apple’s high-end computer lines use chiplet technology, as do Intel and AMD’s more powerful chips.

    About a quarter of the global chip packaging and testing market sits in China, according to Dongguan Securities.

    While some say this gives China an advantage in leveraging chiplet technology, Chipuller chairman Yang cautioned the proportion of China’s packaging industry that could be considered advanced was “not very big”.

    Under the right conditions, chiplets that are personalised according to the needs of the customer can be completed quickly, in “three to four months, this is the unique advantage China holds,” according to Yang.

    Needham’s Shi said according to import data published by China’s customs agency, China’s purchase of chip packaging equipment soared to $3.3 billion in 2021 from its previous high of $1.7 billion in 2018, although last year it fell to $2.3 billion with the chip market downturn.

    Since early 2021, research papers on chiplets have been published by researchers from China’s military, the People’s Liberation Army (PLA), and the universities it runs. State-run and PLA-affiliated laboratories are also looking to use chips made with domestic chiplet technology, according to six tenders published over the past three years.

    Government documents also show millions of dollars’ worth of grants to researchers specializing in chiplet technology, while dozens of smaller companies have sprouted up across China in recent years to meet domestic demand for advanced packaging solutions like chiplets.

    CHIPLETS ON THE TABLE

    Against the backdrop of escalating U.S.-China tension, Chinese company Chipuller acquired 28 patents either owned by zGlue or invented by people whose names are on zGlue’s patents, according to an analysis using IP management technology firm Anaqua’s Acclaim IP database.

    The acquisition was through a two-step transfer, first through British Virgin Islands-registered North Sea Investment Co Ltd, according to documents seen by Reuters and confirmed by Yang.

    The Committee on Foreign Investment in the United States (CFIUS), a powerful Treasury-led committee that reviews transactions for potential threats to U.S. security, did not respond to a Reuters request for comment about whether such sales would require their approval.

    CFIUS lawyers Laura Black at Akin’s Trade Group, Melissa Mannino at BakerHostetler and Perry Bechky at Berliner Corcoran & Rowe say patent sales alone would not necessarily give CFIUS authority over the deal, as it depends whether the assets purchased constitute a U.S. business.

    Representative Mike Gallagher, an influential lawmaker whose select committee on China has pressed the Biden administration to take tougher stances on China, told Reuters zGlue’s case highlights the “urgent need to reform CFIUS”.

    “(People’s Republic of China) entities should not be able to act with impunity to take advantage of distressed U.S. firms to transfer their IP to China,” he said in an emailed statement.

    Chipuller’s Yang said zGlue’s lawyer communicated with both CFIUS and the Department of Commerce to ensure the sale to North Sea would not fall foul of export controls.

    These discussions did not include mention of Chipuller or the possibility of a Chinese entity ending up in possession of the patents, according to a Chipuller spokesperson.

    “Everything was done very transparently and in accordance with (U.S.) law,” Yang said.

    Yang said he considered himself a founder of zGlue as he became an investor in the company in 2015, soon after its formation, and later became a director and chairman.

    CFIUS visited zGlue offices in 2018 to conduct an investigation because the company’s largest non-U.S. investor, Yang, was from China, the chairman said.

    “So we have spent a lot of time communicating with CFIUS,” Yang said, adding that Chipuller currently does not supply any Chinese military or U.S.-sanctioned entities.

    Chipuller isn’t the only firm with chiplet technology.

    Huawei, China’s tech and chip design giant that has been put on the U.S.’s most restricted list, has been actively filing chiplet patents.

    Huawei published over 900 chiplet-related patent applications and grants last year in China, up from 30 in 2017, according to Anaqua’s director of analytics solutions Shayne Phillips.

    Huawei declined to comment.

    Reuters identified over a dozen announcements over the past two years for new factories or expansions of existing ones from companies using chiplet technology in manufacturing across China’s tech sector, representing an investment totalling over 40 billion yuan.

    They include domestic giants TongFu Microelectronics (002156.SZ) and JCET Group (600584.SS), as well as fast-growing startups such as Beijing ESWIN Technology Group, which spent 5.5 billion yuan on a factory for its chiplet-focused subsidiary that began operating in April.

    One article published in May by an outlet run by China’s Ministry of Industry and Information Technology (MIIT) urged big Chinese tech firms to use domestic packaging companies such as TongFu to help build China’s self-sufficiency in computing power.

    “Use Chiplet technology to break through the United States’ siege of my country’s advanced process chips,” it said.

    MIIT did not respond to a request for comment.

    Chipuller chairman Yang puts it this way: “Chiplet technology is the core driving force for the development of the domestic semiconductor industry,” he said on the company’s official WeChat channel. “It is our mission and duty to bring it back to China.”

    ($1 = 7.2205 Chinese yuan renminbi)

    Reporting by Jane Lanhee Lee and Eduardo Baptista; Additional reporting by Echo Wang and Stephen Nellis; editing by Kenneth Li, Brenda Goh and Lincoln Feast.



  • 8 AI Trends and Predictions for the Next Decade | Entrepreneur


    As a technology enthusiast and business leader, I have been keenly observing the rapid growth and adoption of ChatGPT over the past few months. Setting aside the debate around the various moral dilemmas associated with artificial intelligence (AI) tools such as ChatGPT, all I can say at this point is that they are going to transform industries and revolutionize the way we live and work. At the risk of sounding a tad presumptuous, I believe AI is not going to replace humans — just like the internet never took over the human world despite so many people raising the alarm that it would.

    So, instead of feeling all doom and gloom, the optimist in me is looking ahead to the next 10 years to understand what it is going to be like in a place we have never been before. It is essential to identify the emerging trends that will shape the future of AI. From advancements in machine learning and robotics to the ethical implications of AI, I would like to delve deep into the exciting possibilities and potential challenges that lie ahead.

    Related: The 3 Biggest Artificial Intelligence (AI) Trends in 2023

    1. Reinforcement learning and self-learning systems

    Reinforcement learning, a branch of machine learning, holds great promise for the future of AI. It involves training AI systems to learn through trial and error and get rewarded for doing something well. As algorithms become more sophisticated, we can expect AI systems to develop the ability to not only learn but get exponentially better at learning and improving without explicit human intervention, leading to significant advancements in autonomous decision-making and problem-solving.
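    The trial-and-error, reward-driven loop described above can be sketched with tabular Q-learning on a toy problem. The corridor environment, reward and hyperparameters below are hypothetical, chosen purely to illustrate the idea:

```python
import random

# Tabular Q-learning on a toy 5-cell corridor: the agent starts in cell 0
# and earns a reward of +1 only on reaching cell 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action] value estimates

def choose(state):
    # Epsilon-greedy: explore occasionally (or on ties), otherwise exploit.
    if random.random() < EPSILON or q[state][0] == q[state][1]:
        return random.randint(0, 1)
    return 0 if q[state][0] > q[state][1] else 1

random.seed(0)
for _ in range(500):  # 500 episodes of trial and error
    state = 0
    while state != GOAL:
        action = choose(state)
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # The reward signal nudges the value estimate for (state, action).
        q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
        state = next_state

# After training, the greedy choice in every non-goal cell should be "right".
print([q[s][1] > q[s][0] for s in range(GOAL)])
```

    No human ever tells the agent which action is correct; repeated reward feedback alone shapes its behavior, which is the essence of the self-improvement described above.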

    AI is also going to greatly help people who want to self-learn using the latest technology aids available to them. Going back to my earlier observation about ChatGPT, this AI model is capable of generating ideas and answering simple to complex questions. However, it requires precise prompts and clear instructions to perform optimally. When it comes to honing self-learning skills, it becomes essential for individuals to first develop the ability to provide such prompts and instructions. When done right, there are endless possibilities to garner knowledge by training the brain on how to distill problems into their essence and think with clarity in order to find the best solutions.

    2. AI in healthcare

    The healthcare sector is likely to benefit a lot from advancements in AI in the coming years. Predictive analytics, machine learning algorithms and computer vision can help diagnose diseases, personalize treatment plans and improve patient outcomes. AI-powered chatbots and virtual assistants can boost patient engagement and expedite administrative processes. I am hopeful that the integration of AI in healthcare will lead to more accurate diagnoses, cost savings and improved access to quality care.

    3. Autonomous vehicles

    The autonomous vehicle industry has already made significant progress, and the next decade will likely witness their widespread adoption. AI technologies such as computer vision, deep learning and sensor fusion will continue to improve the safety and efficiency of self-driving cars.

    4. AI and cybersecurity

    Technology is a double-edged sword, especially when it comes to dealing with bad actors. AI-driven cybersecurity systems are adept at finding and eliminating cyber threats by analyzing large volumes of data and detecting anomalies. In addition, these systems can provide a faster response time to minimize any potential damage caused by a breach. However, with similar technology being used by both defenders and attackers, safeguarding the AI systems themselves might turn out to be a major concern.

    Related: The Future Founder’s Guide to Artificial Intelligence

    5. AI and employment

    The impact of AI on the employment sector appears to be a fiercely debated topic with no clear consensus. According to a recent Pew Research Center survey, 47% of people think AI would perform better than humans at assessing job applications. However, a staggering 71% of people are against using AI to make final hiring decisions. While 62% think that AI will have a significant impact on the workforce over the next two decades, only 28% are concerned that they might be personally affected.

    While AI might take over some jobs, it is also expected to create new job opportunities. Many current AI tools, including ChatGPT, cannot be fully relied on for context or accuracy of information; there must be some human intervention to ensure correctness. For example, when a company decides to reduce the number of writers in favor of ChatGPT, it will also have to hire editors who can carefully examine the AI-generated content to make sure it makes sense.

    6. Climate modeling and prediction

    AI can enhance climate modeling and prediction by analyzing vast amounts of climate data and identifying patterns and trends. Machine learning algorithms can improve the accuracy and granularity of climate models, helping us understand the complex interactions within the Earth’s systems. This knowledge enables better forecasting of natural disasters, extreme weather events, sea-level rise and long-term climate trends. As we look ahead, AI can enable policymakers and communities to make informed decisions and develop effective climate action plans.

    7. Energy optimization and efficiency

    AI can optimize energy consumption and enhance the efficiency of renewable energy systems. Machine learning algorithms analyze energy usage patterns, weather data and grid information to improve energy distribution and storage. AI-powered smart grids balance supply and demand, reducing transmission losses and seamlessly integrating renewable energy sources. This maximizes clean energy utilization, reduces greenhouse gas emissions and lessens our dependence on fossil fuels.

    8. Smart resource management

    AI can revolutionize resource management by optimizing resource allocation, minimizing waste and improving sustainability. For example, in water management, AI algorithms can analyze data from sensors and satellite imagery to predict water scarcity, optimize irrigation schedules and identify leakages. AI-powered systems can also optimize waste management, recycling and circular economy practices, leading to reduced resource consumption and a more sustainable use of materials.

    Related: AI Isn’t Evil — But Entrepreneurs Need to Keep Ethics in Mind As They Implement It

    Ethical considerations

    As AI becomes more integrated into our lives, prioritizing ethical considerations becomes paramount. Privacy, bias, fairness and accountability are key challenges that demand attention. Achieving a balance between innovation and responsible AI practices necessitates collaboration among industry leaders, policymakers and researchers. Together, we must establish frameworks and guidelines to protect human rights and promote social well-being.

Nish Parikh

  • Musk’s Twitter rate limits could undermine new CEO, ad experts say

    Musk’s Twitter rate limits could undermine new CEO, ad experts say

    July 3 (Reuters) – Elon Musk’s move to temporarily cap how many posts Twitter users can read on the social media site could undermine efforts by new CEO Linda Yaccarino to attract advertisers, marketing industry professionals said.

    Musk announced Saturday that Twitter would limit how many tweets per day various accounts can read, to discourage “extreme levels” of data scraping and system manipulation.

    Users posted screenshots in reply, showing they were unable to see any tweets, including tweets on the pages of corporate advertisers, after hitting the limit.

    Ad industry veterans said the move creates an obstacle for Yaccarino, the former NBCUniversal advertising chief who started last month as Twitter’s CEO.

    Yaccarino has sought to repair relationships with advertisers who pulled away from the site after Musk bought it last year, the Financial Times reported last week.

    The limits are “remarkably bad” for users and advertisers already shaken by the “chaos” Musk has brought to the platform, Mike Proulx, research director at Forrester, said on Sunday.

    “The advertiser trust deficit that Linda Yaccarino needs to reverse just got even bigger. And it cannot be reversed based on her industry credibility alone,” he said.

    Lou Paskalis, the founder of advertising consultancy AJL Advisory and former marketing boss at Bank of America, said Yaccarino is Musk’s “last best hope” to salvage ad revenue and the company’s value.

    “This move signals to the marketplace that he’s not capable of empowering her to save him from himself,” he said.

    Under the new cap, unverified accounts were initially limited to 600 posts a day with new unverified accounts limited to 300. Verified accounts could read 6,000 posts a day, Musk said in a post on the site.

    Twitter logo and a photo of Elon Musk are displayed through magnifier in this illustration taken October 27, 2022. REUTERS/Dado Ruvic/Illustration

    Hours later, he said the cap was raised to 10,000 posts per day for verified users, 1,000 per day for unverified and 500 posts per day for new unverified users.

A Twitter spokesperson did not reply on Sunday to requests for comment, including inquiries about how long the restrictions would last.

    Capping how much users can view could be “catastrophic” for the platform’s ad business, said Jasmine Enberg, principal analyst at Insider Intelligence.

    “This certainly isn’t going to make it any easier to convince advertisers to return. It’s a hard sell already to bring advertisers back,” she said.

    Olivia Wedderburn, an executive at creative agency TMW Unlimited, said she was advising her clients to “stop investing in Twitter immediately,” because the platform was turning away heavily engaged users, which she said is the “sole reason” to advertise on Twitter.

    The limit came soon after Twitter began requiring users to log into an account on the social media platform to view tweets, which Musk called a “temporary emergency measure” to combat data scraping.

    Musk had earlier expressed displeasure with artificial intelligence firms like OpenAI, the owner of ChatGPT, for using Twitter’s data to train their large language models.

    Platforms including Reddit and major news media organizations have complained about AI companies using their information to train AI models as some have sought fees.

Kai-Cheng Yang, a researcher at Indiana University in Bloomington, said the limits appeared to be effective in blocking third parties, including search engines, from scraping Twitter data as they had before.

    “It might still be possible, but the methods would be much more sophisticated and much less efficient,” he said.

    Reporting by Jody Godoy in New York, Sheila Dang in Dallas, Akash Sriram in Bengaluru and Martin Coulter in London; editing by Burton Frierson, Nick Zieminski and Marguerita Choy

    Our Standards: The Thomson Reuters Trust Principles.

    Jody Godoy

    Thomson Reuters

    Jody Godoy reports on banking and securities law. Reach her at jody.godoy@thomsonreuters.com

  • Influencer Andrew Tate to stay under house arrest, court rules

    Influencer Andrew Tate to stay under house arrest, court rules

    BUCHAREST, June 23 (Reuters) – Internet personality Andrew Tate will remain under house arrest in Romania for another 30 days from the end of June pending trial on charges of human trafficking, a Bucharest court ruled on Friday.

    Tate was indicted on Tuesday along with his brother Tristan and two Romanian female suspects for human trafficking, rape and forming a criminal gang to sexually exploit women.

They are under house arrest pending an investigation into abuses against seven women who prosecutors say were lured through false claims of relationships, accusations the suspects have denied.

    The four suspects were held in police custody from Dec. 29 until March 31 before a Bucharest court put them under house arrest, which prosecutors on Tuesday sought to extend.

    The Tate brothers are citizens of the United States and Britain. Andrew Tate, a self-described misogynist, built up a following of millions on social media, promoting his own lavish lifestyle in posts which critics say denigrate women.

    The court needs to approve preventative restrictive measures such as house arrest every 30 days. It held a hearing on Wednesday and said it would rule on Friday.

    “We’re not the first affluent wealthy men who have been unfairly attacked,” Tate told reporters on Wednesday after the hearing. “I love this country, I’m going to stay here regardless no matter what and I look forward to being found innocent at the end of everything.”

    The trial will not start immediately. Under Romanian law, the case gets sent to the Bucharest court’s preliminary chamber, where a judge has 60 days to inspect the case files to ensure legality.

    Trafficking of adults carries a prison sentence of up to 10 years, as does rape.

    Prosecutors also said they were investigating the four suspects in a separate ongoing case on allegations of money laundering, witness tampering, and child and adult trafficking.

    Reporting by Luiza Ilie and Octav Ganea; Editing by Alan Charlish and Peter Graff

  • Indian PM Modi wraps up Washington trip with appeal to tech CEOs

    Indian PM Modi wraps up Washington trip with appeal to tech CEOs

    WASHINGTON, June 23 (Reuters) – Indian Prime Minister Narendra Modi met with U.S. and Indian technology executives in Washington on Friday, the final day of a state visit where he agreed new defense and technology cooperation and addressed challenges posed by China.

    U.S. President Joe Biden rolled out the red carpet for Modi on Thursday, declaring after about 2-1/2 hours of talks that their countries’ economic relationship was “booming.” Trade has more than doubled over the past decade.

    Biden and Modi gathered with CEOs including Apple’s (AAPL.O) Tim Cook, Google’s (GOOGL.O) Sundar Pichai and Microsoft’s (MSFT.O) Satya Nadella.

    Also present were Sam Altman of OpenAI, NASA astronaut Sunita Williams, and Indian tech leaders including Anand Mahindra, chairman of Mahindra Group, and Mukesh Ambani, chairman of Reliance Industries, the White House said.

    “Our partnership between India and the United States will go a long way, in my view, to define what the 21st century looks like,” Biden told the group, adding that technological cooperation would be a big part of that partnership.

    Observing that there were a variety of tech companies represented at the meeting from startups to well established firms, Modi said: “Both of them are working together to create a new world.”

    Modi, who has appealed to global companies to “Make in India,” will also address business leaders at the Kennedy Center for Performing Arts.

    The CEOs of top American companies, including FedEx (FDX.N), MasterCard (MA.N) and Adobe (ADBE.O), are expected to be among the 1,200 participants.

    NOT ‘ABOUT CHINA’

    The backdrop to Modi’s visit is the Biden administration’s attempts to draw India, the world’s most populous country at 1.4 billion and its fifth-largest economy, closer amid its growing geopolitical rivalry with Beijing.

    Modi did not address China directly during the visit, and Biden only mentioned China in response to a reporter’s question, but a joint statement included a pointed reference to the East and South China Seas, where China has territorial disputes with its neighbors.

    Farwa Aamer, director for South Asia at the Asia Society Policy Institute, in an analysis note described that as “a clear signal of unity and determination to preserve stability and peace in the region.”

Alongside agreements to sell weapons to India and share with it sensitive military technology, announcements this week included several investments from U.S. firms aimed at spurring semiconductor manufacturing in India and lowering its dependence on China for electronics.

    White House national security spokesperson John Kirby said the challenges presented by China to both Washington and New Delhi were on the agenda, but insisted the visit “wasn’t about China.”

    “This wasn’t about leveraging India to be some sort of counterweight. India is a sovereign, independent state,” Kirby said at a news briefing, adding that Washington welcomes India becoming “an increasing exporter of security” in the Indo-Pacific.

    “There’s a lot we can do in the security front together. And that’s really what we’re focused on,” Kirby said.

    Some political analysts question India’s willingness to stand up to Beijing over Taiwan and other issues, however. Washington has also been frustrated by India’s close ties with Russia while Moscow wages war in Ukraine.

    DIASPORA TIES

    Modi attended a lunch on Friday at the State Department with Vice President Kamala Harris, the first Asian American to hold the No. 2 position in the White House, and Secretary of State Antony Blinken.

    In a toast, Harris spoke of her Indian-born late mother, Shyamala Gopalan, who came to the United States at age 19 and became a leading breast cancer researcher.

    “I think about it in the context of the millions of Indian students who have come to the United States since, to collaborate with American researchers to solve the challenges of our time and to reach new frontiers,” Harris said.

    Modi praised Gopalan for keeping India “close to her heart” despite the distance to her new home, and called Harris “really inspiring.”

    On Friday evening, Modi will address members of the Indian diaspora, many of whom have turned out at events during the visit to enthusiastically fete him, at times chanting “Modi! Modi! Modi!” despite protests from others.

    Activists said Biden had failed to strongly call out what they describe as India’s deteriorating human rights record under Modi, citing allegations of abuse of Indian dissidents and minorities, especially Muslims. Modi leads the Hindu nationalist Bharatiya Janata Party (BJP) and has held power since 2014.

    Biden said he had a “straightforward” discussion with Modi about issues including human rights, but U.S. officials emphasize that it is vital for Washington’s national security and economic prosperity to engage with a rising India.

    Asked on Thursday what he would do to improve the rights of minorities including Muslims, Modi insisted “there is no space for any discrimination” in his government.

“There is no end to data that shows Modi is lying about minority abuse in India, and much of it can be found in the State Department’s own India country reports, which are scathing on human rights,” said Sunita Viswanath, co-founder of Hindus for Human Rights, an advocacy group.

    Reporting by Steve Holland, Simon Lewis and Jeff Mason; additional reporting by Trevor Hunnicutt, Doina Chiacu, David Brunnstrom and Kanishka Singh; Editing by Don Durfee and Grant McCool

    Jeff Mason

    Thomson Reuters

    Jeff Mason is a White House Correspondent for Reuters. He has covered the presidencies of Barack Obama, Donald Trump and Joe Biden and the presidential campaigns of Biden, Trump, Obama, Hillary Clinton and John McCain. He served as president of the White House Correspondents’ Association in 2016-2017, leading the press corps in advocating for press freedom in the early days of the Trump administration. His and the WHCA’s work was recognized with Deutsche Welle’s “Freedom of Speech Award.” Jeff has asked pointed questions of domestic and foreign leaders, including Russian President Vladimir Putin and North Korea’s Kim Jong Un. He is a winner of the WHCA’s “Excellence in Presidential News Coverage Under Deadline Pressure” award and co-winner of the Association for Business Journalists’ “Breaking News” award. Jeff began his career in Frankfurt, Germany as a business reporter before being posted to Brussels, Belgium, where he covered the European Union. Jeff appears regularly on television and radio and teaches political journalism at Georgetown University. He is a graduate of Northwestern University’s Medill School of Journalism and a former Fulbright scholar.

  • Only 1 in 3 African women have access to the internet–compared with half of men. The cost to the continent’s economy could be in the billions

    Only 1 in 3 African women have access to the internet–compared with half of men. The cost to the continent’s economy could be in the billions

    During a trip to Ghana, Tanzania, and Zambia last month, Vice President Kamala Harris announced more than $1 billion in public and private investments to close Africa’s digital divide–with a particular focus on expanding access to girls and women. That might seem like a niche goal. In fact, it will not only expand opportunity for millions, but also have far-reaching ripple effects on health, growth, stability, and resilience across a region of increasing strategic importance.

    Improving women’s access to digital technologies and skills is crucial to ensure they can fully participate in and contribute to today’s economy. Yet, only one in three African women uses the internet today, compared to almost half of men. Women on the continent are also 30% less likely than men to own a smartphone.

    This lack of access hinders women’s entrepreneurship and deprives society of their talents and innovations.

    The internet, for instance, was crucial in helping Fafape Ama Etsa Foe establish E90 Ghana, a sustainable farm in Accra that uses sawdust to grow mushrooms. Sawdust, a byproduct of the woodworking industry, is typically burned, which pollutes the air and can lead to health problems, including cancer. E90 Ghana uses it to produce healthy and nutritious food instead, simultaneously improving the environment and increasing the local food system’s resilience to climate change.

    Ms. Foe, who is locally known as the “Mushroom Queen” and recently met with Vice President Harris to discuss the economic importance of empowering women, told me the internet helped her research mushroom farming techniques, challenges, and opportunities. Today, it also allows her to reach more clients and keep costs down. “I am connected with all my regular clients on WhatsApp and Telegram, where I take their orders and supply them smoothly without delay,” she says. “These digital tools helped me to prevent postharvest losses, which used to account for as high as 25% of annual revenue.”

    Ms. Foe believes improving digital connectivity will foster entrepreneurship among women on the continent by expanding access to information and financing opportunities: “Bridging the digital gender gap will help women, especially to market their products and also come out with new innovative products.”

    It will also benefit their families, communities, and society at large. Indeed, investments in internet infrastructure grow the economy as a whole. The World Bank estimates that expanding broadband penetration by 10% in low- and middle-income economies yields a 1.4% increase in real per capita GDP. And according to the U.N. Women’s Gender Snapshot 2022 report, women’s exclusion from the digital economy has cost low- and middle-income countries $1 trillion in GDP over the previous decade already–and the cost could grow to $1.5 trillion by 2025 if nothing is done to close the gap.

    Whispa Health is another example of a company founded by a woman that would not be possible without reliable internet access. It is a Nigeria-based app that gives users – mostly women and younger people – access to information about their sexual and reproductive health as well as a platform to book appointments with health care providers and buy contraceptives, STI tests, and other health products.

    Morenike Fajemisin, co-founder and CEO, told me she wanted to help young women take care of their health so they could stay in school and achieve their dreams. “As long as that woman or young person has access to a smartphone, she has a way to connect with Whispa Health through our app or any of our social media channels,” she said. “Thanks to the internet, she is a few clicks away from finding the shame-free and confidential health care that she needs.”

    We need more women entrepreneurs like Ms. Foe and Ms. Fajemisin to tackle some of the biggest challenges we are facing today, including climate change, pandemic surveillance, and democratic backsliding. Closing the digital gender divide in Africa is a crucial first step. It will open the innovation economy to millions of women and girls on the continent. It will give them–and through them, their children and communities–access to knowledge and quality education as well as health care, which in turn will further boost economic development, help build more resilient communities, and strengthen democracies.

    The ripple effects will be wide. As Ms. Fajemisin told me, “When girls hear about successful women who come from similar backgrounds or nationalities, they realize that such success is possible for them too.” (Or, as civil rights activist Marian Wright Edelman put it, “You can’t be what you don’t see.”)

    The Global North should not hesitate when it comes to investing in Africa’s digital infrastructure. The population of sub-Saharan Africa–about 1.2 billion people today–is set to almost double by 2050. And according to a study from the Brookings Institution, consumer spending in the continent is expected to rise to $2.5 trillion by 2030.

    More business and philanthropic leaders should answer Vice President Harris’ call to action and join in the effort to promote gender equality and digital access in Africa. We will all benefit.

    Michelle A. Williams is the Dean of Faculty at the Harvard T.H. Chan School of Public Health.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

    More must-read commentary published by Fortune:

  • CEOs may not realize it, but they already know what to do about A.I.

    CEOs may not realize it, but they already know what to do about A.I.

    A.I. has arrived, and CEOs are asking what to do. The answer might surprise them: Do what you know best.

    It’s a safe bet that various forms of artificial intelligence, from algorithmic decision-support systems to machine learning applications, have already made their way into the front and back offices of most companies. Remarkably, generative A.I. is now demonstrating value in creative and imagination-driven tasks.

We’ve seen this movie before. The Internet. Mobile. Social media. And now artificial intelligence. With each, companies have been confronted with a new technology that holds both great promise and considerable uncertainty, adopted seemingly overnight by consumers, students, professionals, and businesses.

    CEOs recognize the challenge. If they take a wait-and-see approach or simply clamp down on A.I. use, they risk missing a historic opportunity to supercharge their products, services, and operations. On the other hand, allowing the new technology to proliferate within their companies in uncoordinated, even haphazard, ways can lead not only to duplication and fragmentation, but to something much more serious: irresponsible uses of A.I., including the perpetuation of biases, amplification of misinformation, and inadvertent release of proprietary data.

    What to do? A.I. is evolving so rapidly that there is no definitive playbook. But most of today’s CEOs have learned valuable lessons from prior technology inflection points. We believe they are well-equipped to apply three basic lessons:

    Data governance must become data and A.I. governance

    Governance may sound to some like heavy-handed, top-down oversight. But this is not about choosing either centralization or decentralization. It’s about developing company-wide approaches and standards for critical enablers, from the technology architecture needed to support and scale A.I. workloads to the ways you ensure compliance with both regulation and your company’s core values. Without enterprise consistency, you won’t have a clear line of sight into your A.I. applications, and you can’t enable integration and scaling.

    You don’t have to start from scratch. Most companies have established data governance to ensure compliance with data privacy regulations, such as the EU’s GDPR. Now, data governance must become data and A.I. governance.

    A.I. applications and models throughout the company should be inventoried, mapped, and continuously monitored. Most urgently, enterprise standards for data quality should be defined and implemented, including data lineage and data provenance. This involves where, when, and how the data was collected or synthesized and who has the right to use it. Some A.I. systems may be “black boxes,” but the data sets selected to train and feed them are knowable and manageable–in particular for business applications.
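To make the inventorying step above concrete, a minimal model-inventory record might look like the sketch below. The class name, fields, and sample values are illustrative assumptions for this article, not an established governance schema:

```python
# Hypothetical sketch of one entry in a company-wide A.I. inventory,
# capturing the data lineage and provenance the paragraph above describes.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str                     # accountable team
    purpose: str
    training_data_sources: list    # data lineage: where the data came from
    data_provenance: str           # how/when it was collected, usage rights
    last_reviewed: str             # checkpoint for continuous monitoring

inventory = [
    ModelRecord(
        name="churn-predictor",
        owner="customer-analytics",
        purpose="flag accounts at risk of cancellation",
        training_data_sources=["crm_events", "support_tickets"],
        data_provenance="first-party data collected under ToS, 2021-2023",
        last_reviewed="2023-06-01",
    ),
]

# A simple governance check: every model must document its data lineage.
undocumented = [m.name for m in inventory if not m.training_data_sources]
print(f"models missing lineage: {undocumented}")
```

Even a lightweight record like this gives the enterprise the "clear line of sight" into its A.I. applications that the authors call for, and it can be extended with compliance and risk fields as regulation matures.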

    Employees don’t need to become data scientists–they need to become A.I.-literate

    History teaches us that when a technology becomes ubiquitous, virtually everyone’s job changes. Here’s an example: The first project of the Data & Trust Alliance–a consortium we co-chair that develops data and A.I. practices–targeted what some might consider unlikely parts of our companies, human resources and procurement.

    The Alliance developed algorithmic safety tools–safeguards to detect, mitigate and monitor bias in the algorithmic systems supplied by vendors for employment decisions.

    When the tools were introduced to HR and procurement professionals, they asked for education, not in how to be a data scientist, but how to be A.I.-literate HR and procurement professionals. We shared modules on how to evaluate the data used to train models, what types of bias testing to look for, how to assess model performance, and more.
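As one illustration of the kind of bias testing an A.I.-literate reviewer might apply to a vendor's screening model, the sketch below computes a disparate-impact ratio (the "four-fifths" rule commonly used in employment-decision auditing). This is a generic example under assumed group labels and decisions, not the Alliance's actual safety tools:

```python
# Disparate-impact check: compare selection rates across groups produced
# by a model's 0/1 decisions, and flag ratios below the 0.8 threshold.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 model decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 selected -> 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 selected -> 0.375
}

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "passes four-fifths rule")
```

A check like this doesn't require the reviewer to understand the model's internals, which is exactly the point of A.I. literacy over data-science expertise: the professional needs to know which test to run and how to read the result.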

    The lesson? Yes, we need data scientists and machine learning experts. But it’s time to enhance the data and A.I. literacy of our entire workforce.

    Set the right culture

    Many companies have adopted ethical A.I. principles, but we know that trust is earned by what we do, more than by what we say. We need to be transparent with consumers and employees about when they are interacting with an A.I. system. We need to ensure that our A.I. systems–especially for high-consequence applications–are explainable, remain under human control, and can withstand the highest levels of scrutiny, including the auditing required by new and proposed regulations. In short, we need to evolve our corporate cultures for the era of A.I.

Another project by the Alliance was to create “new diligence” criteria to assess the value and risk inherent in targeting data- and A.I.-centric companies for investment or acquisition. The Alliance created Data Diligence and AI Diligence, but the greatest need was for Responsible Culture Diligence: ensuring that values, team composition, incentives, feedback loops, and decision rights support the new and unique requirements of A.I.-driven business.

    CEOs have been here before. For some companies, it took decades and a pandemic to fully realize that “digital transformation” implicated every part of the company and its relationships with all stakeholders. And what were the results of misreading the Internet, mobile, and social? Disrupted business models and loss of competitiveness, as well as unintended consequences for society.

    What will be the result of getting this one wrong? We could miss a once-in-a-generation opportunity to achieve radical breakthroughs, solve intractable problems, delight customers, empower employees, reduce waste and errors, and serve society. Far worse, we risk doing harm to our stakeholders and to future generations.

    A.I. is not solely–indeed, not most importantly–a technology challenge. It is the next driver of enterprise transformation. It’s up to the CEO, board, and the entire C-suite to lead that. And the time to do so is now.

    Kenneth I. Chenault and Samuel J. Palmisano are founders and co-chairs of the Data & Trust Alliance, a not-for-profit organization whose 25 cross-industry members develop and adopt responsible data and AI practices. Members include CVS Health, General Catalyst, GM, Humana, Mastercard, Meta, Nike, Pfizer, the Smithsonian Institution, UPS, and Walmart. Chenault is the chairman and managing director of General Catalyst and the former chairman and CEO of American Express. Palmisano is the former chairman and CEO of IBM.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

    More must-read commentary published by Fortune:

  • Meta releases ‘human-like’ AI image creation model

    Meta releases ‘human-like’ AI image creation model

    NEW YORK, June 13 (Reuters) – Meta Platforms (META.O) said on Tuesday that it would provide researchers with access to components of a new “human-like” artificial intelligence model that it said can analyze and complete unfinished images more accurately than existing models.

    The model, I-JEPA, uses background knowledge about the world to fill in missing pieces of images, rather than looking only at nearby pixels like other generative AI models, the company said.

    That approach incorporates the kind of human-like reasoning advocated by Meta’s top AI scientist Yann LeCun and helps the technology to avoid errors that are common to AI-generated images, like hands with extra fingers, it said.

    Meta, which owns Facebook and Instagram, is a prolific publisher of open-sourced AI research via its in-house research lab. Chief Executive Mark Zuckerberg has said that sharing models developed by Meta’s researchers can help the company by spurring innovation, spotting safety gaps and lowering costs.

    “For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make,” he told investors in April.

    The company’s executives have dismissed warnings from others in the industry about the potential dangers of the technology, declining to sign a statement last month backed by top executives from OpenAI, DeepMind, Microsoft (MSFT.O) and Google (GOOGL.O) that equated its risks with pandemics and wars.

LeCun, considered one of the “godfathers of AI,” has railed against “AI doomerism” and argued in favor of building safety checks into AI systems.

    Meta is also starting to incorporate generative AI features into its consumer products, like ad tools that can create image backgrounds and an Instagram product that can modify user photos, both based on text prompts.

    Reporting by Katie Paul; Editing by David Gregorio

  • Facebook Whistleblower Frances Haugen Regrets Nothing

    Facebook Whistleblower Frances Haugen Regrets Nothing

    “I haven’t called my mom enough, that’s definitely true,” former Facebook product manager turned whistleblower Frances Haugen told me on a recent summer Friday. Other than that? She doesn’t have many big regrets. 

    Haugen is dialing in from a moving car: Such is life on the public-speaking circuit, where she has spent much of the last two years since going public as the former employee whose disclosures so tanked the credibility of the company in question that you now know it by its rebranded name, Meta. 

But for old time’s sake—and to preserve the historicity of Facebook still being Facebook back in 2021, when Haugen shared tens of thousands of internal documents detailing the platform’s systemic toxicity-for-profit mindset—Haugen and I both avoid that shiny new name throughout our conversation about the ensuing years of post–“Facebook Files” fallout. If we’re getting picky about regrets, Haugen, who resides in Puerto Rico these days, does wish she could have convened a broader consortium of journalists sooner to unpack the documents for the world. Otherwise, she’s glad we’re all here now, at this point in time when the danger of unregulated social media is such a public concern that even the surgeon general is getting involved.

    “We’re talking about a culture-change issue, right?” Haugen reminds me when I press for any conclusive sense of societal progress made. She’s thinking of this juncture now as our potential parallel to the seatbelts discourse of the 1960s—and how it took concerted effort, particularly from one individual, Ralph Nader (from whom Haugen clearly draws personal inspiration), to pressure corporate forces to implement safety measures we now take for granted. “Back in 1965, the average person did not know that we could live in a world where we set steadily improving standards for car safety,” Haugen says. 

This is the central theme of her work via her nonprofit, Beyond the Screen, as well as her memoir, The Power of One, published this week by Little, Brown and Company: that this could be our seatbelts moment, where we finally demand a basic degree of consumer safety from the increasingly opaque tech platforms bending our reality to their whims—and one day, we’ll look back and shake our heads and wonder how it couldn’t have been more obvious.

    In conversation with Vanity Fair, Haugen discusses how she feels about the past two years of tech oversight, why she’s not as nervous about the advent of AI as you might think, the one stipulation she has for maybe even returning to Facebook one day, and why she thinks it’s worth saving at all. 

    This interview has been edited and condensed for clarity. 

    Vanity Fair: How do you feel about the timing of your memoir? We’re at quite an interesting point in the public’s relationship with social media.

    Frances Haugen: I was blindsided last week when the Surgeon General issued the advisory around teen mental health and social media. Like, two years ago, I was just leaving Facebook. I doubt either you or I thought there was any chance this was in the near-term future. It really symbolizes for me that we are seeing an interesting moment culturally. 

    One of the things that I think most people aren’t aware of is that the Surgeon General has issued very few advisories, maybe fewer than 15 in the last 60 years, about the things that we take for granted now. It’s things like, seatbelts save lives. Smoking causes cancer. Breastfeeding is good for babies. Real mom and apple pie kind of stuff. Those advisories act as the period at the end of a sentence. 

    I worry a little bit that we’re reaching an inflection point where we can pass sensible moderate laws, like the Digital Services Act in the European Union, or we can start passing emotional and extreme laws, like straight up banning TikTok. I really hope my book can play a role in shaping the conversation around what our options are. Is there a third way, you know, that’s not a Chinese approach, but also not the laissez faire approach that got us to where we are right now?

    [ad_2]

    Delia Cai

    Source link

  • Elon Musk and Other Leaders Are Worried About AI. Here’s Why | Entrepreneur

    Elon Musk and Other Leaders Are Worried About AI. Here’s Why | Entrepreneur

    [ad_1]

    Opinions expressed by Entrepreneur contributors are their own.

    “The age of AI has begun,” Bill Gates declared this March, reflecting on an OpenAI demonstration of feats such as acing an AP Bio exam and giving a thoughtful, touching answer to being asked what it would do if it were the father of a sick child.

    At the same time, tech giants like Microsoft and Google have been locked in a race to develop AI tech, integrate it into their existing ecosystems and dominate the market. In February, Microsoft CEO Satya Nadella challenged Sundar Pichai of Google to “come out and dance” in the AI battlefield.

    For businesses, it’s a challenge to keep up. On the one hand, AI promises to streamline workflows, automate tedious tasks and increase overall productivity. Conversely, the AI sphere is fast-paced, with new tools constantly appearing. Where should businesses place their bets to stay ahead of the curve?

    And now, many tech experts are backpedaling. Leaders like Apple co-founder Steve Wozniak and Tesla’s Elon Musk, alongside 1,300 other industry experts, professors and AI luminaries, all signed an open letter calling to halt AI development for six months.

    At the same time, the “godfather of AI,” Geoffrey Hinton, resigned as one of Google’s lead AI researchers and wrote a New York Times op-ed warning of the dangers of the technology he’d helped create.

    Even Sam Altman, CEO of ChatGPT maker OpenAI, joined the chorus of warning voices during a congressional hearing.

    But what are these warnings about? Why do tech experts say that AI could actually pose a threat to businesses — and even humanity?

    Here is a closer look at their warnings.

    Uncertain liability

    To begin with, there is a very business-focused concern: liability.

    While AIs have developed amazing capabilities, they are far from faultless. ChatGPT, for instance, famously invented scientific references in a paper it helped write.

    Consequently, the question of liability arises. If a business uses AI to complete a task and gives a client erroneous information, who is liable for damages? The business? The AI provider?

    None of that is clear right now. And traditional business insurance fails to cover AI-related liabilities.

    Regulators and insurers are struggling to catch up. Only recently, the EU drafted a framework to regulate AI liability.

    Related: Rein in the AI Revolution Through the Power of Legal Liability

    Large-scale data theft

    Another concern is linked to unauthorized data use and cybersecurity threats. AI systems frequently store and handle large amounts of sensitive information, much of it collected in legal gray areas.

    This could make them attractive targets for cyberattacks.

    “In the absence of robust privacy regulations (US) or adequate, timely enforcement of existing laws (EU), businesses have a tendency to collect as much data as they possibly can,” explained Merve Hickok, Chair & Research Director at Center for AI and Digital Policy, in an interview with The Cyber Express.

    “AI systems tend to connect previously disparate datasets,” Hickok continued. “This means that data breaches can result in exposure of more granular data and can create even more serious harm.”

    Misinformation

    Next up, bad actors are turning to AI to generate misinformation. Not only can this have serious ramifications for political figures, especially with an election year looming; it can also cause direct damage to businesses.

    Whether targeted or accidental, misinformation is already rampant online. AI will likely drive up the volume and make it harder to spot.

    Imagine AI-generated photos of business leaders, audio mimicking a politician’s voice or artificial news anchors announcing convincing fake economic news. Business decisions triggered by such fabricated information could have disastrous consequences.

    Related: Pope Francis Didn’t Really Wear A White Puffer Coat. But It Won’t Be the Last Time You’re Fooled By an AI-Generated Image.

    Demotivated and less creative team members

    Entrepreneurs are also debating how AI will affect the psyche of individual members of the workforce.

    “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the open letter asks.

    According to Matt Cronin, the U.S. Department of Justice’s National Security & Cybercrime Coordinator, the answer is a clear “No.” Such a large-scale replacement would devastate the motivation and creativity of people in the workforce.

    “Mastering a domain and deeply understanding a topic takes significant time and effort,” he writes in The Hill. “For the first time in history, an entire generation can skip this process and still progress in school and work. However, reliance on generative AI comes with a hidden price. You are not truly learning — at least not in a way that meaningfully benefits you.”

    Ultimately, widespread AI use may lower team members’ competence, including critical thinking skills.

    Related: AI Can Replace (Some) Jobs — But It Can’t Replace Human Connection. Here’s Why.

    Economic and political instability

    It is unknown exactly what economic shifts widespread AI adoption will cause, but they will likely be large and fast. After all, a recent Goldman Sachs estimate projected that two-thirds of current occupations could be partially or fully automated, with unclear ramifications for individual businesses.

    According to experts’ more pessimistic outlooks, AI could also incite political instability. This could range from election tampering to truly apocalyptic scenarios.

    In an op-ed in Time Magazine, decision theorist Eliezer Yudkowsky called for a general halt to AI development. He and others argue that we are unprepared for powerful AIs and that unfettered development could lead to catastrophe.

    Conclusion

    AI tools hold immense potential to increase businesses’ productivity and level up their success.

    However, it’s crucial to be aware of the danger that AI systems pose, not just according to doomsayers and techno-skeptics, but according to the very same people who developed these technologies.

    That awareness will help infuse businesses’ AI approach with a caution critical to successful adaptation.

    [ad_2]

    Hasan Saleem

    Source link

  • Facebook faces new allegations of gender discrimination in its delivery of job ads. Research by human rights group suggests it’s a global concern

    Facebook faces new allegations of gender discrimination in its delivery of job ads. Research by human rights group suggests it’s a global concern

    [ad_1]

    Additional research shared exclusively with CNN by Global Witness suggests that this algorithmic bias is a global issue, the human rights group says.

    “Our concern is that Facebook is exacerbating the biases that we live with in society and actually marring opportunities for progress and equity in the workplace,” Naomi Hirst, who leads Global Witness’ campaign strategy on digital threats to democracy, told CNN.

    Global Witness previously filed complaints with the UK Equality and Human Rights Commission and Information Commissioner’s Office over similar discrimination concerns, which remain under investigation. At the time, Global Witness said a spokesperson for Meta (which was still called Facebook at the time) told the group that its “system takes into account different kinds of information to try and serve people ads they will be most interested in,” and that it was “exploring expanding limitations on targeting options for job, housing and credit ads to other regions beyond the US and Canada.”
    The European complaints also mirror a complaint filed with the US Equal Employment Opportunity Commission in December by women’s trucking organization Real Women in Trucking, alleging that Facebook discriminates based on age and gender when deciding which users to show job ads to. Meta declined to comment to CNN about the Real Women in Trucking complaint.

    Meta spokesperson Ashley Settle said in a statement that Meta applies “targeting restrictions to advertisers when setting up campaigns for employment, as well as housing and credit ads, and we offer transparency about these ads in our Ad Library.”

    “We do not allow advertisers to target these ads based on gender,” Settle said in the statement. “We continue to work with stakeholders and experts across academia, human rights groups and other disciplines on the best ways to study and address algorithmic fairness.”

    Meta did not comment specifically about the new complaints filed in Europe. The company also did not respond to a question asking in which countries it now limits targeting options for employment, housing and credit ads.

    Missing out on jobs because of your gender

    Facebook has faced various claims of discrimination, including in its delivery of job advertisements, over the past decade. In 2019, as part of an agreement to settle multiple lawsuits in the United States, the platform promised to make changes to prevent biased delivery of housing, credit and employment ads based on protected characteristics, such as gender and race.

    Efforts to address those disparities included removing the option for advertisers to target employment ads based on gender, but this latest research suggests that change is being undermined by Facebook’s own algorithm, according to the human rights groups.

    As a result, the groups say, countless users may be missing out on the opportunity to see open jobs they could be qualified for, simply because of their gender. They worry this could exacerbate historic workplace inequities and pay disparities.

    “You cannot escape big tech anymore, it’s here to stay and we have to see how it impacts women’s rights and the rights of minority groups,” said Linde Bryk, head of strategic litigation at Bureau Clara Wichmann. “It’s too easy, as a corporation, to just hide behind the algorithm, but if you put something on the market … you should also be able to control it.”

    Global Witness conducted additional experiments in four other countries — including India, South Africa and Ireland — and says the research shows that the algorithm perpetuated similar biases around the world.

    With more than 2 billion daily active users around the world, Facebook can be a key source for helping users find job openings.
    The platform’s business model relies on its algorithm’s careful targeting of advertisements to the users it thinks are most likely to click on them — so that ad buyers see returns from their spending on the platform. But Global Witness’ research suggests that this results in job ads being targeted to users based on gender stereotypes. And in some cases, human rights advocates say, the biases that appear to be shown by Facebook’s ad system may exacerbate other disparities.

    In France, for example, Facebook is often used for job searches by people of lower income levels, meaning the people most affected by its alleged algorithmic biases may be those already in marginalized positions, said Caroline Leroy-Blanvillain, lawyer and member of the legal force steering committee at Fondation des Femmes.

    Pat de Brún, head of Amnesty International’s big tech accountability team, said he was not necessarily surprised by the findings of Global Witness’ research. “Research consistently shows how Facebook’s algorithms deliver deeply unequal outcomes and often reinforce marginalization and discrimination,” de Brún told CNN. “And what we see is the reproduction and amplification of some of the worst aspects of society.”

    “We have this illusion of neutrality that the algorithms can provide, but actually they’re very often reproducing those biases and often obscuring the biases and making them more difficult to challenge,” he said.

    Gendered targeting

    To conduct the experiments cited in the complaints, Global Witness ran a series of job ads in France and the Netherlands over two-day periods between February and April. The advertisements linked to real job postings found on employment websites, and researchers selected positions — including preschool teacher, psychologist, pilot and mechanic — traditionally associated with gender stereotypes.

    Global Witness targeted the ads to adult Facebook users of any gender who resided in, or had recently visited, the chosen countries. The researchers requested that the ads “maximize the number of link clicks,” but otherwise left it up to Facebook’s algorithm to determine who ultimately saw the advertisements.

    The ads were often shown to users along heavily gendered lines, according to an analysis of the data provided by Facebook’s ad manager platform.

    “Just because advertisers can’t select it, doesn’t mean that the ‘gender’ [category] doesn’t weigh in the process of showing ads at all,” one of the Netherlands complaints states.

    In France, for example, 93% of the users shown a preschool teacher job ad and 86% of those shown a psychologist job ad were women, while women comprised just 25% of users shown a pilot job ad and 6% of those shown a mechanic job ad, according to Facebook’s ad manager platform.

    Similarly, in the Netherlands, 85% of the users shown a teacher job ad and 96% of those shown a receptionist job ad were women, while just 4% of those shown a job ad for a mechanic were women, according to Facebook’s data. Certain roles were less strongly skewed — a package delivery job ad, for example, was shown to 38% women users in the Netherlands.

    The results mirrored those Global Witness has found in the United Kingdom, where women were more often shown ads for nursery teacher and psychologist jobs, and men were overwhelmingly shown ads for pilot and mechanic positions.

    In some cases, the degree of gender imbalance in how users were targeted for certain jobs varied by country — in India, just 39% of the users shown a psychologist job ad were women, while in Europe and South Africa, women were more likely than men to be shown psychologist job ads. A further exception was pilot ads shown in South Africa, which were more balanced, with 45% of users shown a pilot ad being women.

    Global Witness also ran tests in Indonesia, but Facebook’s ad manager was unable to identify the genders of many of the users who were shown the advertisements, making it difficult to conduct a robust analysis of the results there.

    “Even though Facebook may have become less fashionable in certain countries, it remains the key communications platform for much of the world … as the public square where public discourse happens,” Amnesty International’s de Brún said. “They should be ensuring these discriminatory outcomes do not happen, intentionally or not.”

    Because little information is publicly available about how Facebook’s algorithm works, the complaints acknowledge that the cause of the gender skew was not exactly clear. One of the Netherlands complaints speculates about whether the algorithm may have been trained on “contaminated” data such as outdated information about which genders typically hold which roles.

    Meta did not respond to questions from CNN about how the algorithm that runs its ad system is trained. In a 2020 blog post about its ad delivery system, Facebook said ads are shown to users based on a variety of factors, including “behavior on and off” the platform. Earlier this year, Facebook launched a “variance reduction system” — a new machine learning technology — to “advance equitable distribution” of housing ads in the United States, and said it planned to expand the system to US employment and credit ads.

    Seeking algorithmic transparency

    From November 2016 to September 2018, Facebook was hit with five discrimination lawsuits and charges from US civil rights and labor organizations, workers and individuals, alleging that the company’s ad systems excluded certain people from seeing housing, employment and credit ads based on their age, gender or race.
    The legal actions followed a slew of critical coverage of Facebook’s advertising systems, including one 2018 ProPublica investigation that found Facebook was facilitating the spread of discriminatory advertisements by allowing employers using its platform to target users of only one sex with job ads. Some companies were targeting only men with ads for trucking or police jobs, for example, while others targeted only women with ads for nursing or medical assistant jobs, according to the report. (A Facebook spokesperson said in a statement responding to the report at the time that discrimination is “strictly prohibited in its policies” and that it would “defend our practices.”)
    In March 2019, Facebook agreed to pay nearly $5 million to settle the lawsuits. The company also said it would launch a different advertising portal for housing, employment and credit ads on Facebook, Instagram and Messenger offering fewer targeting options.
    “There is a long history of discrimination in the areas of housing, employment and credit, and this harmful behavior should not happen through Facebook ads,” then-Facebook COO Sheryl Sandberg said in a blog post at the time of the settlement. Sandberg added that the company had engaged a civil rights firm to review its ad tools and help it understand how to “guard against misuse.”
    Later that year, the US Equal Employment Opportunity Commission ruled that seven employers who bought Facebook ads targeting workers of only certain ages or genders had violated federal law.
    In addition to restricting advertisers from targeting employment, housing and credit ads based on gender, Facebook also prohibits targeting based on age and requires that location targeting have a minimum radius of 25 kilometers (or about 15.5 miles), the company says. For all advertisements on its platform, Facebook in 2022 removed targeting options based on sensitive characteristics, such as religious practices or sexual orientation. The company also requires advertisers to comply with its non-discrimination policy, and makes all ads available for anyone to view in its Ad Library.
    Still, researchers have continued to find evidence that Facebook’s delivery of job advertisements may be discriminatory, including a study out of the University of Southern California published in 2021.
    In December, Real Women in Trucking filed its EEOC complaint alleging that Facebook’s job ads algorithm discriminates based on age and gender. “Men receive the lion’s share of ads for blue-collar jobs, especially jobs in industries that have historically excluded women,” the complaint states, while “women receive a disproportionate share of ads for lower-paid jobs in social services, food services, education, and health care.”

    “People don’t look for jobs or housing in newspapers, or even the radio, anymore, they go online, that’s where all information flows for economic opportunities,” said Peter Romer-Friedman, one of the attorneys representing Real Women in Trucking. “If you’re not part of the group that’s receiving the information, you lose out on the opportunity to hear about and pursue that job.”

    Opinion: An internet that women want? It looks like this

    Romer-Friedman was also on the negotiating team that worked on the 2019 settlement agreement with Facebook. At the time, he said, he and others raised concerns that while Facebook’s promised changes were a step in the right direction, the same bias issues could be replicated by the platform’s algorithm.

    Meta declined to comment on the EEOC complaint from Real Women in Trucking; filings in cases with the agency are not publicly available.

    The French and Dutch agencies will have discretion about whether to take up the investigations requested in the latest complaints. Global Witness and its partners say they hope that potential decisions by the human rights agencies on their findings could put pressure on Meta to improve its algorithm, increase transparency and prevent further discrimination. Meta could ultimately face significant fines if the countries’ data protection agencies decide to investigate the issue and ultimately find the company to have violated the EU’s General Data Protection Regulation, which prohibits discriminatory use of user data.

    “What we’re hoping with these complaints is that it forces [Facebook] to the table to crack open the black box of their algorithm, to explain how they can correct what appears to be … discrimination by their algorithm,” Global Witness’ Hirst said. “I think we know enough about gendered workforces and gendered jobs to say that Facebook is adding to the problem.”

    —-

    Credits

    Commissioning Editor: Meera Senthilingam

    Editor: Seth Fiegerman

    Data and Graphics Editor: Carlotta Dotto

    Illustrations: Carolina Moscoso for CNN

    Visual Editors: Tal Yellin, Damian Prado, David Blood and Gabrielle Smith

    [ad_2]

    Source link

  • Intuit CEO: How Company Avoided Mass Layoffs, ‘Fake Work’ | Entrepreneur

    Intuit CEO: How Company Avoided Mass Layoffs, ‘Fake Work’ | Entrepreneur

    [ad_1]

    This article originally appeared on Business Insider.

    Mass layoffs through 2022 and 2023 are the result of companies and CEOs miscalculating the long-term impact of the pandemic, according to Sasan Goodarzi, chief executive of software giant Intuit.

    Companies had made the incorrect assumption that COVID-19 had brought about structural changes, rather than one-off changes driven by events, Goodarzi told Insider in an interview.

    Intuit, which owns a portfolio of software products including email-marketing service Mailchimp, tax-filing software TurboTax, and credit service CreditKarma, had 17,300 employees, as of July last year, according to financial filings, up from 13,500 the prior year. A spokeswoman confirmed to Insider that the company has not conducted mass layoffs.

    “When you see ads going through the roof, payments volume — that’s just two examples — some companies assume that is a structural change that will never pull back,” he said. “They then hired in sales, data analytics, engineering to support that growth into perpetuity.”

    Now, companies that grew in the pandemic are seeing a slowdown. “They don’t need all that cost structure, that factually I do see,” he added.

    During the first months of the pandemic, internet traffic surged as much as 60% in some countries, according to an OECD analysis. That translated to big boosts to digital companies’ bottom lines.

    Amazon grew employees 138% between 2018 and 2022, per analysis by Insider, and experienced record profits during the pandemic. Meta grew its employees by 143% over the same period, and Alphabet by 93%.

    These firms are now aggressively cutting jobs.

    Amazon is axing 27,000 jobs. Meta is set to cut 21,000 staff, with CEO Mark Zuckerberg admitting in a memo he had wrongly assumed that the surge in online activity during the pandemic would mean a “permanent acceleration” for Meta’s business.

    “I got this wrong, and I take responsibility for that,” Zuckerberg wrote last November.

    It wasn’t ‘fake work’

    Goodarzi disputed one characterization of mass layoffs by his fellow tech CEOs: that they were the result of some people doing “fake work.”

    “I’m not sure any companies hired a bunch of people to do fake work,” Goodarzi said, adding that this was “a real reach.”

    The term “fake work” went mainstream in March after venture capitalist Keith Rabois suggested that Google and Facebook had spent years intentionally overhiring staff to bolster their own headcount and prevent engineers from going to rival firms. The cuts, he argued, were an inevitable corollary of the bloat. In May, Elon Musk claimed that Twitter employed “a lot of people doing things that didn’t seem to have a lot of value” prior to his drastic job cuts.

    “There’s nothing for these people to do — they’re really — it’s all fake work,” Rabois said at the time. “Now that’s being exposed, what do these people actually do, they go to meetings.”

    However, Goodarzi told Insider that mass layoffs had in fact unnerved the remaining star talent at major tech firms, particularly in AI.

    Hiring, he said, had “actually become easier because of all the tech layoffs, because of the uncertainty the layoffs have caused.” He added: “It’s getting people to raise their heads who wouldn’t.”

    [ad_2]

    Shona Ghosh

    Source link

  • Live updates: Apple unveils new products at WWDC 2023 event

    Live updates: Apple unveils new products at WWDC 2023 event

    [ad_1]

    From Apple

    The company showed off a new mixed reality headset called Apple Vision Pro, in what promises to be its biggest and riskiest new hardware launch in years.

    It will cost $3,499.

    Apple CEO Tim Cook said the device, which blends virtual reality and augmented reality, is “the first product you look through, not at.”

    Augmented reality is a technology that allows users to overlay virtual images on live video of the real world.

    “It looks familiar but it’s entirely new … just like it’s in your physical space, using natural intuitive tools like your hands, face and voice,” he said.

    According to Apple, once a user puts on the device, they’re able to see apps directly projected in front of them. The interface is designed to look “truly present” in your room, responding to light and casting shadows to help users understand scale and distance.

    “It’s easy to make apps any scale … anywhere in your space that feels natural,” an Apple executive said at the event. “It’s just you and your content … it feels like magic.”

    The device responds to a user’s hands and eyes “as if your mind is guiding the experience.” It even works if the user’s hands are resting in their lap.

    Vision Pro, which features a custom R1 processor, will run on VisionOS, allowing developers to reimagine existing apps or create new experiences and worlds for the device.

    Apple said it previewed Vision Pro to a subset of developers ahead of the event — some of whom created experiences ranging from virtually seeing how the human heart works to support for Microsoft Office. Users can unlock the Vision Pro with their iris via Optic ID (think FaceID for the eyes).

    The company said it filed 5,000 patents during the development of the device.

    [ad_2]

    Source link

  • Reimagining Education in the Age of Technology | Entrepreneur

    Reimagining Education in the Age of Technology | Entrepreneur

    [ad_1]

    Opinions expressed by Entrepreneur contributors are their own.

    There’s no doubt that the age of technology has transformed various sectors of society, but its impact on education is particularly profound. We’re now at a point where we must reassess our traditional notions of education and begin to reimagine it in the light of technological advancements.

    In the traditional classroom, education has long been a one-size-fits-all affair. With a single teacher facing a room full of students, the pace of teaching is often dictated by the average student’s ability. This approach leaves little room for individual attention, which can lead to students at both ends of the spectrum — the struggling and the gifted — feeling underserved.

    Related: How Will Technology Transform Global Education In 2023?

    The benefits of technology in education

    The advent of technology, however, opens up a world of possibilities for personalized, adaptive learning. Educational platforms are now harnessing artificial intelligence (AI) to create learning environments that adapt to the needs of each student. Lessons can be presented in an array of formats, from text and graphics to videos and interactive simulations, catering to different learning styles. With real-time feedback, these platforms can adjust the level of difficulty, the pace of lessons and the types of exercises to fit each student’s unique learning curve. This individualized approach could address the challenges of the traditional classroom, offering a more efficient and inclusive education.

    Further, the connectivity offered by the internet has made knowledge more accessible than ever. It’s not just about connecting to a vast amount of information, it’s also about connecting to people. Platforms like Coursera and edX have democratized education, enabling anyone with an internet connection to access courses from prestigious universities worldwide. Online communities and discussion forums have turned learning into a collaborative, interactive experience, not confined by geographical boundaries.

    But as we embrace the benefits of technology in education, it’s equally important to remain aware of the challenges that lie ahead.

    The challenges ahead

    First, there’s the issue of the digital divide. Not every student has access to the technology required for digital learning. Even when the devices are available, reliable internet connections may not be, especially in rural and low-income areas. It’s crucial that we address this disparity and ensure that the benefits of technology-aided education are equitably distributed.

    Second, while technology offers personalized learning, there’s a risk of isolating students. Traditional classrooms foster social interaction and teamwork — vital skills for the real world. Therefore, it’s essential that the design of digital learning environments incorporates features that promote collaboration and interaction.

    Third, the privacy and security of students’ data is a significant concern. As more of our children’s education takes place online, it’s paramount that platforms adhere to strict data privacy standards to protect students’ sensitive information.

    Finally, there’s a concern about the readiness of our educators. Teachers need to be equipped with the skills and knowledge to use these technologies effectively. They need to transition from being knowledge dispensers to learning facilitators, a shift that requires significant training and support.

    Related: How This Startup Is Infusing Technology with Education in Rural Schools

    What to keep in mind going forward

    In conclusion, there’s no denying that technology has immense potential to revolutionize education. It promises personalized, accessible and collaborative learning that could address many of the flaws of our current system. However, as we chart the path for this new era of education, it’s essential that we do so thoughtfully.

    We need to ensure that the benefits of technology in education reach every student, regardless of their socioeconomic status. We must incorporate social interactions and collaborations in the digital learning environment to prepare students for the real world. We need to prioritize the security and privacy of students’ data. And, most importantly, we must equip our teachers with the skills and support they need to navigate this new terrain.

    The journey to reimagine education in the age of technology is complex and fraught with challenges. However, if we approach it thoughtfully and inclusively, we have the opportunity to create an education system that truly serves every student’s unique needs and prepares them for the future. We have the opportunity to democratize knowledge, ensuring that learning is not a privilege for the few but a right for all.

    Moreover, the successful integration of technology into education has broader implications for society. It could foster a culture of lifelong learning, where individuals continuously upgrade their skills to stay relevant in the fast-paced world. In a future where AI and automation are set to disrupt job markets, such a culture is not just desirable but necessary.

    Furthermore, a more educated populace could drive innovation, economic growth and social progress. Imagine the solutions we could create if more minds had access to quality education and the tools to apply that knowledge. Imagine the societal problems we could solve if critical thinking and problem-solving were ingrained in our education system.

    Related: 3 Challenges of Education that Ed-tech is Addressing

    So, let’s not shy away from the challenges of integrating technology into education. Let’s see them as opportunities to refine and improve the system. Let’s learn from the successes and failures of early adopters and strive to create a digital learning environment that is inclusive, engaging, secure and effective.

    At the end of the day, education is not just about imparting knowledge; it’s about empowering individuals. It’s about fostering curiosity, creativity and empathy. It’s about equipping our youth with the skills and mindset they need to navigate the future. Technology can aid in this endeavor, but only if we use it thoughtfully, responsibly and inclusively.

    In this age of technology, let’s not merely digitize education. Let’s reimagine it. For the potential rewards — a more educated, innovative and inclusive society — are well worth the effort.

    Aidan Sowa

  • Exclusive: Chinese hackers attacked Kenyan government as debt strains grew



    • Cyber spies infiltrated Kenyan networks from 2019
    • Hit finance ministry, president’s office, spy agency and others
    • Sources believe Beijing was seeking info on debt

    NAIROBI, May 24 (Reuters) – Chinese hackers targeted Kenya’s government in a widespread, years-long series of digital intrusions against key ministries and state institutions, according to three sources, cybersecurity research reports and Reuters’ own analysis of technical data related to the hackings.

    Two of the sources assessed the hacks to be aimed, at least in part, at gaining information on debt owed to Beijing by the East African nation: Kenya is a strategic link in the Belt and Road Initiative – President Xi Jinping’s plan for a global infrastructure network.

    “Further compromises may occur as the requirement for understanding upcoming repayment strategies becomes needed,” a July 2021 research report written by a defence contractor for private clients stated.

    China’s foreign ministry said it was “not aware” of any such hacking, while China’s embassy in Britain called the accusations “baseless”, adding that Beijing opposes and combats “cyberattacks and theft in all their forms.”

    China’s influence in Africa has grown rapidly over the past two decades. But, like several African nations, Kenya’s finances are being strained by the growing cost of servicing external debt – much of it owed to China.

    The hacking campaign demonstrates China’s willingness to leverage its espionage capabilities to monitor and protect economic and strategic interests abroad, two of the sources said.

    The hacks constitute a three-year campaign that targeted eight of Kenya’s ministries and government departments, including the presidential office, according to an intelligence analyst in the region. The analyst also shared with Reuters research documents that included the timeline of attacks, the targets, and provided some technical data relating to the compromise of a server used exclusively by Kenya’s main spy agency.

    A Kenyan cybersecurity expert described similar hacking activity against the foreign and finance ministries. All three of the sources asked not to be named due to the sensitive nature of their work.

    “Your allegation of hacking attempts by Chinese Government entities is not unique,” Kenya’s presidential office said, adding the government had been targeted by “frequent infiltration attempts” from Chinese, American and European hackers.

    “As far as we are concerned, none of the attempts were successful,” it said.

    It did not provide further details nor respond to follow-up questions.

    A spokesperson for the Chinese embassy in Britain said China is against “irresponsible moves that use topics like cybersecurity to sow discord in the relations between China and other developing countries”.

    “China attaches great importance to Africa’s debt issue and works intensively to help Africa cope with it,” the spokesperson added.

    THE HACKS

    Between 2000 and 2020, China committed nearly $160 billion in loans to African countries, according to a comprehensive database on Chinese lending hosted by Boston University, much of it for large-scale infrastructure projects.

    Kenya used over $9 billion in Chinese loans to fund an aggressive push to build or upgrade railways, ports and highways.

    Beijing became the country’s largest bilateral creditor and gained a firm foothold in the most important East African consumer market and a vital logistical hub on Africa’s Indian Ocean coast.

    By late 2019, however, when the Kenyan cybersecurity expert told Reuters he was brought in by Kenyan authorities to assess a hack of a government-wide network, Chinese lending was drying up. And Kenya’s financial strains were showing.

    The breach reviewed by the Kenyan cybersecurity expert and attributed to China began with a “spearphishing” attack at the end of that same year, when a Kenyan government employee unknowingly downloaded an infected document, allowing hackers to infiltrate the network and access other agencies.

    “A lot of documents from the ministry of foreign affairs were stolen and from the finance department as well. The attacks appeared focused on the debt situation,” the Kenyan cybersecurity expert said.

    Another source – the intelligence analyst working in the region – said Chinese hackers carried out a far-reaching campaign against Kenya that began in late 2019 and continued until at least 2022.

    According to documents provided by the analyst, Chinese cyber spies subjected the office of Kenya’s president, its defence, information, health, land and interior ministries, its counter-terrorism centre and other institutions to persistent and prolonged hacking activity.

    The affected government departments did not respond to requests for comment, declined to be interviewed or were unreachable.

    By 2021, global economic fallout from the COVID-19 pandemic had already helped push one major Chinese borrower – Zambia – to default on its external debt. Kenya managed to secure a temporary debt repayment moratorium from China.

    In early July 2021, the cybersecurity research reports shared by the intelligence analyst in the region detailed how the hackers secretly accessed an email server used by Kenya’s National Intelligence Service (NIS).

    Reuters was able to confirm that the victim’s IP address belonged to the NIS. The incident was also covered in a report from the private defence contractor reviewed by Reuters.

    Reuters could not determine what information was taken during the hacks or conclusively establish the motive for the attacks. But the defence contractor’s report said the NIS breach was possibly aimed at gleaning information on how Kenya planned to manage its debt payments.

    “Kenya is currently feeling the pressure of these debt burdens…as many of the projects financed by Chinese loans are not generating enough income to pay for themselves yet,” the report stated.

    A Reuters review of internet logs delineating the Chinese digital espionage activity showed that a server controlled by the Chinese hackers also accessed a shared Kenyan government webmail service more recently, from December 2022 until February this year.

    Chinese officials declined to comment on this recent breach, and the Kenyan authorities did not respond to a question about it.

    ‘BACKDOOR DIPLOMACY’

    The defence contractor, pointing to identical tools and techniques used in other hacking campaigns, identified a Chinese state-linked hacking team as having carried out the attack on Kenya’s intelligence agency.

    The group is known as “BackdoorDiplomacy” in the cybersecurity research community, because of its record of trying to further the objectives of Chinese diplomatic strategy.

    According to Slovakia-based cybersecurity firm ESET, BackdoorDiplomacy re-uses malicious software against its victims to gain access to their networks, making it possible to track its activities.

    Provided by Reuters with the IP address of the NIS hackers, Palo Alto Networks, a U.S. cybersecurity firm that tracks BackdoorDiplomacy’s activities, confirmed that it belongs to the group, adding that its prior analysis shows the group is sponsored by the Chinese state.

    Cybersecurity researchers have documented BackdoorDiplomacy hacks targeting governments and institutions in a number of countries in Asia and Europe.

    Incursions into the Middle East and Africa appear less common, making the focus and scale of its hacking activities in Kenya particularly noteworthy, the defence contractor’s report said.

    “This angle is clearly a priority for the group.”

    China’s embassy in Britain rejected any involvement in the Kenya hackings, and did not directly address questions about the government’s relationship with BackdoorDiplomacy.

    “China is a main victim of cyber theft and attacks and a staunch defender of cybersecurity,” a spokesperson said.

    Reporting by Aaron Ross in Nairobi, James Pearson in London and Christopher Bing in Washington
    Additional reporting by Eduardo Baptista in Beijing
    Editing by Chris Sanders and Joe Bavier

    Our Standards: The Thomson Reuters Trust Principles.

    Aaron Ross

    Thomson Reuters

    West & Central Africa correspondent investigating human rights abuses, conflict and corruption as well as regional commodities production, epidemic diseases and the environment, previously based in Kinshasa, Abidjan and Cairo.

    James Pearson

    Thomson Reuters

    Reports on hacks, leaks and digital espionage in Europe. Ten years at Reuters with previous postings in Hanoi as Bureau Chief and Seoul as Korea Correspondent. Author of ‘North Korea Confidential’, a book about daily life in North Korea. Contact: 447927347451

    Christopher Bing

    Thomson Reuters

    Award-winning reporter covering the intersection between technology and national security with a focus on how the evolving cybersecurity landscape affects government and business.


  • Silicon Valley is knowingly violating A.I. ethical principles. Society can’t respond if we let disagreements poison the debate



    With criticism of ChatGPT much in the news, we are also increasingly hearing about disagreements among thinkers who are critical of A.I. While debating about such an important issue is natural and expected, we can’t allow differences to paralyze our very ability to make progress on A.I. ethics at this pivotal time. Today, I fear that those who should be natural allies across the tech/business, policy, and academic communities are instead increasingly at each other’s throats. When the field of A.I. ethics appears divided, it becomes easier for vested interests to brush aside ethical considerations altogether.

    Such disagreements need to be understood in the context of how we reached the current moment of excitement around the rapid advances in large language models and other forms of generative A.I.

    OpenAI, the company behind ChatGPT, was initially set up as a non-profit amid much fanfare about a mission to solve the A.I. safety problem. However, as it became clear that OpenAI’s work on large language models was lucrative, OpenAI pivoted to a capped-profit structure. It deployed ChatGPT and partnered with Microsoft, which has consistently sought to depict itself as the tech corporation most concerned about ethics.

    Both companies knew that ChatGPT violates, for example, the globally endorsed UNESCO AI ethical principles. OpenAI even refused to publicly release a previous version of GPT, citing worry about much the same kinds of potential for misuse we are now witnessing. But for OpenAI and Microsoft, the temptation to win the corporate race trumped ethical considerations. This has nurtured a degree of cynicism about relying on corporate self-governance or even governments to put in place necessary safeguards.

    We should not be too cynical about the leadership of these two companies, which are trapped between their fiduciary responsibility to shareholders and a genuine desire to do the right thing. They remain people of good intent, as are all raising concerns about the trajectory of A.I.

    This tension is perhaps best exemplified in a recent tweet by U.S. Senator Chris Murphy (D-CT) and the response by the A.I. community. In discussing ChatGPT, Murphy tweeted: “Something is coming. We aren’t ready.” And that’s when the A.I. researchers and ethicists piled on. They proceeded to criticize the Senator for not understanding the technology, indulging in futuristic hype, and focusing attention on the wrong issues. Murphy hit back at one critic: “I think the effect of her comments is very clear, to try to stop people like me from engaging in conversation, because she’s smarter and people like her are smarter than the rest of us.”

    I am saddened by disputes such as these. The concerns that Murphy raised are valid, and we need political leaders who are engaged in developing legal safeguards. His critic, however, is not wrong in questioning whether we are focusing attention on the right issues.

    To help us understand the different priorities of the various critics and, hopefully, move beyond these potentially damaging divisions, I want to propose a taxonomy for the plethora of ethical concerns raised about the development of A.I. I see three main baskets: 

    The first basket has to do with social justice, fairness, and human rights. For example, it is now well understood that algorithms can exacerbate racial, gender, and other forms of bias when they are trained on data that embodies those biases.

    The second basket is existential: Some in the A.I. development community are concerned that they are creating a technology that might threaten human existence. A 2022 poll of A.I. experts found that half expect A.I. to surpass human intelligence by 2059, and recent advances have prompted some to bring their estimates forward.

    The third basket relates to concerns about placing A.I. models in decision-making roles. Two technologies have provided focal points for this discussion: self-driving vehicles and lethal autonomous weapons systems. However, similar concerns arise as A.I. software modules become increasingly embedded in control systems in every facet of human life.

    Cutting across all these baskets is the potential misuse of A.I., such as spreading disinformation for political and economic gain, and the two-century-old concern about technological unemployment. While the history of economic progress has primarily involved machines replacing physical labor, A.I. applications can replace intellectual labor.

    I am sympathetic to all these concerns, though I have tended to be a friendly skeptic towards the more futuristic worries in the second basket. As with the above example of Senator Murphy’s tweet, disagreements among A.I. critics are often rooted in the fear that existential arguments will distract from addressing pressing issues about social justice and control.

    Moving forward, individuals will need to judge for themselves who they believe to be genuinely invested in addressing the ethical concerns of A.I. However, we cannot allow healthy skepticism and debate to devolve into a witch hunt among would-be allies and partners.

    Those within the A.I. community need to remember that what brings us together is more important than differences in emphasis that set us apart.

    This moment is far too important.

    Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative (AIEI). He is Emeritus Chair of the Technology and Ethics study group at the Yale University Interdisciplinary Center for Bioethics.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


  • Why People Fear Generative AI — and What to Do About It | Entrepreneur



    Opinions expressed by Entrepreneur contributors are their own.

    People are scared of generative AI, but the future is safe and bright if you prepare now.

    I recently published an expert roundup on the benefits of generative AI. Some people worried about bias and political agendas, while others thought jobs would disappear and technocrats would hoard all wealth. Fortunately, we can mitigate risks through transparency, corporate governance and educational transformation.

    Below, I’ll discuss the fears and dangers of generative AI and potential solutions for each:

    Biased algorithms can shape public opinion

    Bias is inherent in every system. Editors have always selected stories to publish or ignore. With the advent of the internet, search engines rewarded publishers for optimized content and advertising, empowering a class of search engine marketers. Then, social media platforms developed subjective quality standards and terms of service. Additionally, bias can arise from training algorithms on data with disproportionate demographic representation. As such, we’ll face the same problems, solutions and debates over safety and privacy with generative AI that we already face in other systems.

    Some people believe in legislative solutions, but those are influenced by lobbyists and ideologues. Instead, consider competition among ChatGPT, Bard, Llama and other generative AIs. Competition sparks innovation, where profits and market share drive unique approaches. As demand increases, the job market will explode with demand for algorithm bias auditors, similar to the growth of diversity training in human resources.

    It’s challenging to find the source of bias in a black-box algorithm, where users only see the inputs and outputs of the system. However, open-source code bases and training sets will enable users to test for bias in the public space. Coders may develop transparent white-box models, and the market will decide a winner.
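    One way such public bias testing often works in practice is a simple fairness audit: compare how often a model produces a favorable outcome for one demographic group versus another. As a minimal illustrative sketch (the data and function below are hypothetical, not from any specific auditing tool), this computes the "demographic parity gap" between groups:

    ```python
    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return the largest difference in positive-prediction rate
        between any two demographic groups (0.0 = perfectly even)."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values())

    # Hypothetical audit sample: 1 = model gave a favorable outcome
    preds = [1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_gap(preds, groups))  # 0.75 vs 0.25 -> 0.5
    ```

    A gap near zero suggests even treatment on this metric; a large gap flags the model for closer review. Real auditors would use richer metrics and far larger samples, but open models and training sets make even this kind of basic check possible for outside researchers.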

    Related: The 3 Principals of Building Anti-Bias AI

    Generative AI could destroy jobs and concentrate wealth

    Many people fear that elite technocrats will replace workers with robots and accumulate wealth while society suffers. Consider how technology replaced jobs for decades. The cotton gin replaced field workers who toiled in the hot sun. Movable type replaced scribes who hand-wrote books, and ecommerce websites displaced many physical stores.

    Some workers and businesses suffered from these transformations. But people learned new skills, and employers hired them to fill talent gaps. We will need radically different education and training to survive. Some people won’t upskill in time, and we have an existing social safety net for them.

    Historically, we valued execution over ideas. Today, ideation may set humans apart from machines, where “ideators” replace knowledge workers. Our post-AI world will require critical thinkers, creatives and others to innovate and define ideas for AIs to execute. Quality assurance professionals, algorithm trainers and “prompt engineers” will have a vibrant future, too.

    There will also be a market for “human-made” products and services. People will hunger for a uniquely human touch informed by emotional intelligence, especially in the medical and hospitality industries. An episode of 60 Minutes ended with “100% human-generated content,” and others will follow.

    Generative AI may create an influx of spam

    Many marketers saw ChatGPT as a shortcut to content creation, publishing articles verbatim. The risky technique is just a cheap, fast, low-quality form of ghostwriting.

    In contrast, generated content may make digital marketing more equitable by reducing ghostwriting costs for bootstrapped entrepreneurs. The key is understanding Google E-E-A-T, which stands for Experience, Expertise, Authoritativeness and Trustworthiness. Your Google reputation and ranking hinge on your published work. So, people who improve and customize generated content will prosper, while Google flags purveyors of “copy-paste” as spammers.

    Rogue AI could pose cybersecurity risks

    A rogue coder could create harmful directives for an AI to damage individuals, software, hardware and organizations. Threats include malware, phishing schemes and other cybersecurity threats. But that’s already happening. Before the internet, we battled computer viruses targeting people, organizations and equipment. For-profit antivirus providers have served this market need to keep us safer.

    Zero-trust platforms like blockchain may detect anomalies and mitigate cybersecurity risks. In addition, companies will create standard operating procedures (SOPs) to protect their systems — and profits. Therefore, new jobs will materialize to develop new processes, governance, ethics and software.

    Related: Why Are So Many Companies Afraid of Generative AI?

    Stolen identities and reputation attacks could be imminent

    People already create deepfake videos of celebrities and politicians. Many are parodies, but some are malicious. Soon, humans will be unable to detect them. Historically, we’ve had this capability since Photoshop was released, and teams are already in place to address misinformation and fake images at social media companies and news outlets.

    Regulations and policing will never prevent the creation of fake content. Nefarious characters will find tools on the black market and the dark web. Fortunately, there are solutions in the private sector already.

    Social media platforms will continue to block presumably fake content and stolen identities. And more solutions will come to fruition. Tools can already detect generated content and continue to improve. Some may become integrated with internet browsers that start issuing fake content warnings. Or celebrities may wear timestamped, dynamic QR codes for authentication when filming.

    The singularity may finally arrive

    The thought of a conscious AI megalomaniac crosses sci-fi geek minds everywhere. Find comfort knowing that it may already exist. After all, we can’t detect biological or technological consciousness. Yet, consciousness may emerge from complex systems like generative AI. Indeed, the simulation hypothesis suggests we’re in a simulation that an AI controls already.

    Related: Addressing the Undercurrent of Fear Towards AI in the Workforce

    History is full of dangerous technology. Warren Buffett compared AI to the atom bomb. If he’s right, then we’re as safe as we have been since 1945, the first and only time nuclear weapons were used in war. Systems are in place to mitigate that risk, and new systems will arise to keep AI safe, too. Our future will remain bright if enough people pursue cybersecurity and related fields. With that in mind, learn to use this technology and prepare for the shift towards AGI.

    Dennis Consorte