ReportWire

Tag: ai unemployment

  • The Great Tech Worker Burnout Has Begun

    Let me ask you a question: Are you burned out at your job?

    Yeah, you’re putting on a brave face. Slinging that code, leaning into the new AI frontier, closing ticket after ticket, hitting those milestones. Disrupting at scale!

    You’re fine. You’re great! Well, at least your boss thinks you’re fine. Good job.

    So wait. Let me ask you another question, one we don’t both already know the answer to.

    Why are you burned out?

    Because 80 percent of you are, at least a little. That’s four out of every five of you. 

    Yeah, Dave, good to hear that things are really going well for you. Be quiet for a second. 

    What’s more, at least one out of every five of you is at “critical levels” of burnout.

    Shut the hell up, Dave, we don’t need your advice right now. We already know how pickleball changed your life.

    What we do need is focus on the real reasons for tech burnout, what it’s going to cost the tech industry, and how to recover.

    Techies Fought The Law and the Law Won

    Last year at about this same time, I penned a rather snarky piece on the beginnings of the Great Tech Revolution. I listed all the reasons why folks like you and me had had enough. Enough of the RTO mandates, enough of the AI-replacement threats, enough of being told to do impossibly more with increasingly less and expecting double digit growth on top of it.

    But, unfortunately, I know how tech folks operate. And I called it. The Great Tech Revolution sputtered out like a memory leak finally eating the last of the available RAM. 

    Yeah, like three people laughed at that. Silently. But I know my audience.

    The reason why the coup failed? That aforementioned WORST TECH JOB MARKET EVER. Yeah. And I will keep shouting about it because no one is really talking about it in those terms – until they get canned and join the massive and ever-growing army of the undead who have been unemployed for a year or more.

    I actually had one commenter refer to laid-off tech employees as the “laptop elites.” Believe me, kids, there is no empathy out there for us, because we’re all “tech bros,” every last one of us. You too, Rachel. You’re a tech bro.

    While my reasons for the burnout revolved around what I scientifically referred to as “dick moves” from management and leadership, most of the articles documenting the Great Tech Burnout focus on organizational chaos, vague requirements, shifting goals, and so on.

    I have no argument with that. It’s really all the reasons. Like, think of a reason. Yeah, it’s that too. They even threw remote work in there because apparently none of us know how to set our own boundaries.

    Look, I’ve got enough experience inside and outside of the executive washroom to know that when the problem is everything, the problem is that no one knows what the problem is.

    So here’s the real problem:

    Tech burnout is everything, but it’s also happening everywhere, all at once. The burnout and its resulting hellfire of unproductive humans is overwhelming management and leadership. No one has a solution, so they’re throwing band-aids at a bullet wound and everyone is acting like it’ll be fine.

    Did I just say what you were thinking?

    The True Cost of Software Developer Burnout 

    It’s about $1 trillion a year.

    What, you were expecting some pithy, emotionally driven wisdom as the “true cost,” like the wings of a technical angel being unceremoniously clipped?

    No dude. Sandblasted tech workforces cost companies real dollars. Lots of them.

    Let me do some quick math. Last year, the cost of tech worker burnout was pegged at $1 trillion. Did things get better or worse this year?

    A + B = I need to eat this entire bag of chocolate bars.

    So what do we do about it? Well, not what we’ve done, that’s for sure.

    Depressed? There’s An App For That!

    We’re already spending over $100 billion collectively on wellness apps and programs.

    How’s that working out for you?

    And Dave’s got his pickleball. 

    But here’s something I learned as soon as they let me use the executive washroom. 

    When you don’t know the problem, you should use soft, vague qualitative terms to discuss the cause, based on symptoms – high workloads, inefficient processes, unclear goals and targets, constant context switching. Those all sound like horrible things that nobody likes, and we should really tackle them head-on with our full organizational effort.

    Let’s put a committee on it. In the meantime, buy Calm for everyone. 

    But wait.

    High workloads, inefficient processes, unclear goals and targets, constant context switching… what causes those horrible things?

    I’ll just… leave this mic right here on the floor.

    We’re Moving Through the Grief Cycle

    Tech is dying. We’re collectively killing it when our leaders focus on short-term returns, AI as a solution for all the problems, and the numerical quantification of every possible productivity metric.

    I went to school for Industrial Engineering. We focused a lot on efficiency. Then in the real world I realized that efficiency is something you do for a little while to maintain a growth peak for as long as you can. It’s not what you do to get off the floor.

    Why are we so burned out? We’re grieving growth and we’re grieving innovation. Anyone who got into tech to fire the rocket has moved through the grief process from anger straight to depression.

    We skipped bargaining, because we don’t have leverage.

    But we kinda do! Our collective moping is costing our employers a freaking fortune!

    That’s not a victory. I’ll close with a little story. 

    Don’t Let Spite Lead You, Because It’ll Ground You

    Look, when I’m in a failure cycle, which is often, I wake up every day and before long, one of the first thoughts that creeps into my head is, “They’re all gonna pay.”

    Don’t get me wrong. I don’t know who they are. And I don’t know how they’ll pay. But if I stopped and thought about it, I would admit that it’s probably coming from a place where I know they are going to acknowledge my talents, see my drive, and take seriously my, whatever, my brilliant ideas – and they will actually pay me money for all that. 

    It could also mean anger is a great motivator, when channeled correctly. 

    One time I got so mad at a potential business partner who last-minute surprise-scrubbed a super lucrative and company-saving deal with one of my startups, that I immediately went out and co-invented one of the first generative AI platforms. It was a five-year long game, but everybody won. I mean everybody.

    Except the former potential business partner. 

    But at that point, I didn’t care. They congratulated me and apologized to me years later, and it meant absolutely nothing. 

    After that, I realized burnout was just anger without a channel.

    You have talent. You have drive. And some of your ideas are not terrible. Over hundreds of years of business history, that cocktail eventually wins. It’s up to you, not them, to figure out how to channel all that into your own success.

    Join the rebel alliance of over 10K tech professionals on my email list. At least we’ll all get a good laugh out of it.

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Joe Procopio


  • Why Tech Workers Don’t Trust AI

    It’s expected that tech companies are going to wind up spending a whopping $1.5 trillion on AI in 2025.

    That’s a lot of chatbots.

    Amazon is allegedly leading the way at $100 billion, and major players Oracle, Microsoft, Meta, and Alphabet round out a group of megacaps that will plunk down $320 billion, more than a fifth of the overall spend.

    It makes that $20 for Claude Pro seem like a bargain, right?

    Let me get serious for a moment, because this trillion-dollar AI money train might be speeding towards a wall of employee mistrust. And the resulting trainwreck might make the cash-fueled NFT bonfire look conservative.

    Let’s talk about the tech employee AI trust gap.

    Why Tech Workers Don’t Trust Workplace AI

    I’ve been a builder of AI tools since 2010, and I’ve been sounding the alarm on how we’ve been selling AI to both business and consumers since ChatGPT debuted in 2022. 

    There’s a massive disconnect brewing between sellers of AI and buyers of AI, because while executives continue to rubber-stamp high-dollar AI investments, more than half of all workers don’t trust their workplace AI to benefit them. Thus, a strong-but-hidden employee AI resistance has taken hold.

    Over the last six months or so, an interesting phenomenon has been happening across what we’re now calling the AI sector. As more regular people like you and me have had time to react to the integration of these AI tools into our lives, we’re finally able to figure out their limitations, where they should be used, and more importantly, where they shouldn’t.

    As this was happening, more and more AI experts have been speaking out on everything from the true definition of artificial intelligence to the corners being cut in the name of grabbing early AI market share, to the underwhelming return on corporate investment in AI – if you want to call 95 percent of companies seeing no return whatsoever “underwhelming.”

    That NFT metaphor doesn’t seem so stupid now, does it. 

    And of course, AI platform developers see mistrust as a huge threat. They know that no matter how groundbreaking their technology might be – and make no mistake, this is groundbreaking technology – if the market doesn’t trust it, they’ll reject it. 

    So I’m speculating here, but suddenly, maybe six months or so ago, your AI chatbot started agreeing with you when you tell it you think it’s hallucinating, or performing bad math, or just making shit up. It will agree with you, apologize, and then take another crack at your request. 

    This worked, but it didn’t. Just because someone tells you when they’re working against you doesn’t mean you’ll trust them when they tell you they’re working for you.

    “Performative Acceptance” is the New Normal

    So now we’re in the middle of a whirlpool-style cycle.

    Companies are promoting, even forcing AI adoption in an effort to justify those massive investments. Employees, working in the shadow of maybe the worst tech job market in history, are performing implementation theater – using the AI, giving the boss a smile and a thumbs up, and putting “machine learning expert” on their resume, while quietly waiting for a frozen job market to thaw.

    This performative acceptance is playing out in two primary ways, acted out by two completely different types of tech employees. First, you’ve got the quiet corner developer:

    “Every week one of my friends announces on LinkedIn that they got laid off,” says “Tammy,” a senior software developer at a mid-sized tech company in the middle of the country. “I will just do whatever they tell me to. If they want me to use AI when I code, I’ll do it. It’s helpful in some ways, but it really isn’t making me more productive. If my productivity drops, I could lose my job. So I have to play a game of showing how AI is making me more productive.”

    Tammy is sugar-coating it. “Bobby,” a sales engineer for a Fortune 500 tech giant, does not:

    “If I wasn’t so angry it would be funny,” he chuckled. “You want me to waste time training AI to do my job, watch it do a shitty job, but then tell you how amazing it is, so you can replace me with it? This is my life now, Joe. I’m living the dream.”

    These are just two cases. They’re anecdotal. They prove nothing. But they highlight a bevy of AI implementation mistakes that need to be undone.

    Filling The AI Trust Gap

    Every new technological advancement comes with its own share of overselling in the beginning. The problem is that for this new AI cycle, the overselling was more like a manic threat:

    “INJECT AI DIRECTLY INTO YOUR VEINS OR YOU WILL DIE PENNILESS AND FULL OF SHAME!”

    In that FOMO-fueled race to AI adoption, leadership bought into AI promises without involving the employees who would actually use it. Then companies spent billions on AI tools but skipped the part where they engaged with their workforce to best adopt those new tools. Now many of those same companies are wasting billions justifying those decisions at the expense of a friction-filled workplace.

    There’s a lot of resentment here, and the job market won’t stay frozen forever. When it thaws, resentment always turns into resignations.

    As the fascination with generative AI dies down and the limitations of “vibe coding” become better understood, more tangible concepts like AI automation tools, agentic AI, and even neural-network-driven decision making are starting to shift the AI hype talk from “AI can run your entire company” to “AI can do cool things in the hands of the right people.”

    This is a second chance. To sell AI as less about “machines that think” and more about “really fast computing.” The latter, I can assure you, is the best definition of artificial intelligence you’re gonna get.

    Tech employees aren’t children. They’ve already figured this out. If we want to fill the AI trust gap, it’s time to start being reasonable about what the AI endgame should really be. Otherwise, untrusting employees who see AI as a liability and not a benefit will end up going somewhere that will invest in them, not chatbots. And those AI-first companies will discover their billion-dollar AI investments created resentment instead of productivity.

    If you found yourself agreeing with this, or not, please join my email list and get a quick heads up when I write something.


    Joe Procopio


  • CEOs may not realize it, but they already know what to do about A.I.

    A.I. has arrived, and CEOs are asking what to do. The answer might surprise them: Do what you know best.

    It’s a safe bet that various forms of artificial intelligence, from algorithmic decision-support systems to machine learning applications, have already made their way into the front and back offices of most companies. Remarkably, generative A.I. is now demonstrating value in creative and imagination-driven tasks.

    We’ve seen this movie before. The Internet. Mobile. Social media. And now artificial intelligence. With each, the business world has been confronted with a new technology that holds both great promise and considerable uncertainty, adopted seemingly overnight by consumers, students, professionals, and businesses.

    CEOs recognize the challenge. If they take a wait-and-see approach or simply clamp down on A.I. use, they risk missing a historic opportunity to supercharge their products, services, and operations. On the other hand, allowing the new technology to proliferate within their companies in uncoordinated, even haphazard, ways can lead not only to duplication and fragmentation, but to something much more serious: irresponsible uses of A.I., including the perpetuation of biases, amplification of misinformation, and inadvertent release of proprietary data.

    What to do? A.I. is evolving so rapidly that there is no definitive playbook. But most of today’s CEOs have learned valuable lessons from prior technology inflection points. We believe they are well-equipped to apply three basic lessons:

    Data governance must become data and A.I. governance

    Governance may sound to some like heavy-handed, top-down oversight. But this is not about choosing either centralization or decentralization. It’s about developing company-wide approaches and standards for critical enablers, from the technology architecture needed to support and scale A.I. workloads to the ways you ensure compliance with both regulation and your company’s core values. Without enterprise consistency, you won’t have a clear line of sight into your A.I. applications, and you can’t enable integration and scaling.

    You don’t have to start from scratch. Most companies have established data governance to ensure compliance with data privacy regulations, such as the EU’s GDPR. Now, data governance must become data and A.I. governance.

    A.I. applications and models throughout the company should be inventoried, mapped, and continuously monitored. Most urgently, enterprise standards for data quality should be defined and implemented, including data lineage and data provenance. This involves where, when, and how the data was collected or synthesized and who has the right to use it. Some A.I. systems may be “black boxes,” but the data sets selected to train and feed them are knowable and manageable–in particular for business applications.

    Employees don’t need to become data scientists–they need to become A.I.-literate

    History teaches us that when a technology becomes ubiquitous, virtually everyone’s job changes. Here’s an example: The first project of the Data & Trust Alliance–a consortium we co-chair that develops data and A.I. practices–targeted what some might consider unlikely parts of our companies, human resources and procurement.

    The Alliance developed algorithmic safety tools–safeguards to detect, mitigate and monitor bias in the algorithmic systems supplied by vendors for employment decisions.

    When the tools were introduced to HR and procurement professionals, they asked for education, not in how to be a data scientist, but how to be A.I.-literate HR and procurement professionals. We shared modules on how to evaluate the data used to train models, what types of bias testing to look for, how to assess model performance, and more.

    The lesson? Yes, we need data scientists and machine learning experts. But it’s time to enhance the data and A.I. literacy of our entire workforce.

    Set the right culture

    Many companies have adopted ethical A.I. principles, but we know that trust is earned by what we do, more than by what we say. We need to be transparent with consumers and employees about when they are interacting with an A.I. system. We need to ensure that our A.I. systems–especially for high-consequence applications–are explainable, remain under human control, and can withstand the highest levels of scrutiny, including the auditing required by new and proposed regulations. In short, we need to evolve our corporate cultures for the era of A.I.

    Another project by the Alliance was to create “new diligence” criteria to assess the value and risk inherent in targeting data- and A.I.-centric companies for investment or acquisition. The Alliance created Data Diligence and AI Diligence, but the greatest need was for Responsible Culture Diligence – ensuring that values, team composition, incentives, feedback loops, and decision rights support the new and unique requirements of A.I.-driven business.

    CEOs have been here before. For some companies, it took decades and a pandemic to fully realize that “digital transformation” implicated every part of the company and its relationships with all stakeholders. And what were the results of misreading the Internet, mobile, and social? Disrupted business models and loss of competitiveness, as well as unintended consequences for society.

    What will be the result of getting this one wrong? We could miss a once-in-a-generation opportunity to achieve radical breakthroughs, solve intractable problems, delight customers, empower employees, reduce waste and errors, and serve society. Far worse, we risk doing harm to our stakeholders and to future generations.

    A.I. is not solely–indeed, not most importantly–a technology challenge. It is the next driver of enterprise transformation. It’s up to the CEO, board, and the entire C-suite to lead that. And the time to do so is now.

    Kenneth I. Chenault and Samuel J. Palmisano are founders and co-chairs of the Data & Trust Alliance, a not-for-profit organization whose 25 cross-industry members develop and adopt responsible data and AI practices. Members include CVS Health, General Catalyst, GM, Humana, Mastercard, Meta, Nike, Pfizer, the Smithsonian Institution, UPS, and Walmart. Chenault is the chairman and managing director of General Catalyst and the former chairman and CEO of American Express. Palmisano is the former chairman and CEO of IBM.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


    Kenneth I. Chenault, Samuel J. Palmisano
