Integrating the breakthrough tech was seen as more important than attracting talented new workers.
Kit Eaton
I heard someone recently say you can’t mandate a mentality. That’s what I think about when I consider the intense push by company leaders to drive AI adoption among their employees. While I personally love AI and it’s been a force multiplier, I also recognize that not everyone is like me.
All said, if the goal is to drive adoption, then in their fervor to win the AI race, many organizations have skipped steps critical to a successful effort.
That first step is change management — the structured approach to transitioning individuals, teams, and organizations from their current state to a desired future state. We talk all the time about change management in business parlance, but in our zeal to beat others out the door, these fundamental principles seem to be set aside.
That’s a mistake.
Research from McKinsey shows that 70 percent of change programs fail to achieve their goals, largely due to employee resistance and lack of management support. Adopting AI, like any other major initiative, is a change management process. Mandates are rarely universally accepted, and this top-down approach is often met with significant resistance.
I’ve written about effective change management and how to communicate change, but if we want to boil down the basics: tell people the who, what, when, why, and how with deep emphasis on “what’s in it for me,” “why are we doing this,” and “why will this help us.”
That’s not what many organizations are doing. Organizational leaders are shifting to AI with the rationale of “because I said so.” For many, that’s not sufficient. As I often say, absent a narrative, people will create one. Leaders need to provide the why, the rationale, and give people the larger vision so they know how to engage with AI.
Transformation without adequate motivation is stagnation, but transformation with a shared vision becomes sustainable momentum.
So how do we go about it? The answer lies in thoughtfully addressing five fundamental questions before rolling out any AI initiative.
1. What: Define the Problem You’re Solving
The first question is: what are we solving for? If you don’t know what you’re solving for, how can you ask staff to embrace AI tools when you can’t even say where they’re leading? First, figure out what you want to solve. That’s the “what.”
2. Who: Identify Your Audience
After you figure out what you’re solving for, you need to determine to whom it applies. AI is not a panacea, and there probably should be specific departments with legitimate use cases identified. For the problem you’ve defined, determine the audience who will be most impacted and who needs to be involved.
3. Why: Provide the Motivation
The next aspect is the “why.” People need inspiration, people need motivation, people need to understand why you’re asking them to do what you’re asking them to do. Treat people like adults and give them the reason(s). You can’t just say “because I told you so.” That’s empty, unhelpful, and less than inspiring.
4. When: Establish Clear Timelines
Then there is the “when.” When are we trying to get it done? What’s the timeline for this? Because we know what the problem is and what we’re solving for, there should be a date for when we solve it or accomplish a milestone. If you can’t say when, then it remains open-ended forever, and that’s also less than inspiring.
5. How: Map Out the Execution
And finally, there is the “how.” This is probably the most underrated of the who, what, when, why, and how construct, but how are we going to do it? There should be clear instructions for how we’re going to achieve the goal. That means thinking about timelines, tools, milestones, rules, responsibilities, owners, contributors, and mapping that all out. People need to know what tools they are using and what those tools will help them achieve. And they may need to be trained on the tools.
What I see far too often is a tool morass, a chaotic proliferation of AI platforms and applications with no clear guidance on which tool serves which purpose, no integration between systems, and no coherent strategy. Employees become overwhelmed by the sheer number of options and paralyzed by uncertainty about which tool to use for their specific needs. This confusion breeds frustration and resistance, ultimately undermining the entire adoption effort.
Define the tools, the timeline, the anticipated outcomes, and the measures of success. This means investigating tools thoroughly, understanding how they interplay with existing systems, setting clear strategy and guardrails, and choosing company-right tools rather than a scattershot ‘AI everything’ approach.
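One way to make the “how” concrete is to capture the plan as structured data that everyone can read. The sketch below is purely illustrative; the tools, dates, owners, and measures are hypothetical placeholders, not a prescribed template.

```python
# A hypothetical AI rollout plan expressed as data: which tool serves
# which purpose, by when, owned by whom, and measured how. Every value
# here is a made-up example for illustration.
rollout_plan = {
    "problem": "Cut average support-ticket resolution time",
    "audience": ["customer-support", "ops"],
    "tools": {
        "drafting-assistant": "first-reply drafts",
        "ticket-classifier": "routing and triage",
    },
    "milestones": [
        ("pilot with 10 agents", "2025-03-01"),
        ("team-wide rollout", "2025-06-01"),
    ],
    "owners": {"sponsor": "VP Support", "lead": "AI program manager"},
    "measures": ["median resolution time", "agent adoption rate"],
}

# Each tool maps to exactly one stated purpose, avoiding the "tool morass."
for tool, purpose in rollout_plan["tools"].items():
    print(f"{tool}: {purpose}")
```

The point of the exercise is that a plan like this leaves no ambiguity about which tool to reach for, which is exactly the confusion that undermines adoption.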
You can’t mandate a mentality. You can’t force people to embrace AI and reasonably expect it to stick simply because leadership declares it important. What you can do is create the conditions for meaningful adoption by treating your people like adults and giving them context, purpose, clear guidance, and a compelling reason to change.
The organizations that will win the AI race aren’t the ones that move fastest out the gate with mandates and pressure. They’re the ones that take the time to bring their people along on the journey, building genuine buy-in and capability at every level. That’s not just good change management, it’s smart leadership.
The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.
Bernard Coleman
Samsung Electronics is expected to post its highest third-quarter profit since 2022, driven by higher memory chip prices supported by server demand as customers rebuild inventories, analysts’ estimates showed.
The world’s biggest maker of memory chips is projected to report an operating profit of $7.11 billion for the July-September period, according to LSEG SmartEstimate from 31 analysts, which is weighted toward those who are more consistently accurate. This would be up 10 percent from a year earlier.
Analysts attributed the recovery mainly to better conventional memory chip pricing, which would offset weaker sales volumes of high-bandwidth memory (HBM) chips as Samsung has yet to supply its latest HBM products to Nvidia.
HBM chips, critical for artificial intelligence (AI) development, are designed to reduce power consumption and process large datasets by stacking chips vertically.
Analysts said demand for memory chips, particularly from hyperscalers and AI-related investments for services such as ChatGPT, have put more workload on general servers, thus boosting conventional memory chip prices.
Prices of some DRAM chips, widely used in servers, smartphones and PCs, jumped 171.8 percent in the third quarter from a year earlier, according to TrendForce data.
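As a quick arithmetic aside, a year-over-year jump like that is simply the percent change between the two periods. The prices below are hypothetical index values chosen only to illustrate the calculation, not TrendForce data.

```python
# Illustrative only: year-over-year percent change. A 171.8 percent
# jump means the current price is roughly 2.72x the prior-year price.
def yoy_change(current: float, prior: float) -> float:
    """Percent change from the prior-period value to the current value."""
    return (current - prior) / prior * 100.0

prior_price = 100.0    # hypothetical index value a year ago
current_price = 271.8  # value implied by a 171.8 percent jump
print(round(yoy_change(current_price, prior_price), 1))  # 171.8
```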
While Samsung’s conventional memory business performed well, analysts said delays in supplying its latest 12-layer HBM3E chips to Nvidia have hurt its profit and share price.
Rivals SK Hynix and Micron have gained more from AI-driven demand, while Samsung’s exposure to China, where advanced chip sales are restricted by the United States, has constrained its growth.
Analysts said market sentiment toward Samsung’s shares and chip business, including both memory and contract chip manufacturing, is expected to improve as it secures supply deals with major customers such as OpenAI and Tesla.
Samsung shares have risen more than 43 percent following its announcement of a chip supply deal with Tesla in July.
During OpenAI CEO Sam Altman’s visit to South Korea earlier this month, Samsung, SK Hynix and OpenAI announced partnerships to supply advanced memory chips to the Stargate project.
The AI chip deal between OpenAI and AMD, one of Samsung’s major HBM customers, would also benefit Samsung, said Ryu Young-ho, a senior analyst at NH Investment & Securities.
Ryu added that Samsung’s $16.5 billion foundry deal with Tesla has lifted expectations that Samsung’s struggling contract chip manufacturing business could win more orders from major tech firms if the company delivers the project as planned.
While recent AI-driven supply deals signal a positive outlook for Samsung, analysts cautioned that uncertainties remain, including potential U.S. tariffs on chips and China’s tightened export controls on rare earth materials used in advanced chips and manufacturing equipment.
In September, Micron said it expects to sell out all of its HBM chips for calendar year 2026 in the coming months due to strong demand.
Samsung will announce its estimates on revenue and operating profit on Tuesday, with full results due later this month.
Reporting by Heekyong Yang; Editing by Jacqueline Wong
Reuters
Called Reflection AI, the company is now valued at about $8 billion, up some 15-fold from last March, when it announced $130 million in funding. The company is less than two years old.
Reflection, which launched in March 2024, originally aimed to build a “superintelligent autonomous coding system” and use that as a jumping-off point. Now, it is working on building an open alternative to the types of closed frontier models that giants like OpenAI are developing. In other words, Reflection wants to be the U.S. answer to China’s DeepSeek.
“AI is becoming the technology layer that everything else runs on top of,” Reflection noted in a blog post about the funding. “But the frontier is currently concentrated in closed labs. If this continues, a handful of entities will control the capital, compute, and talent required to build AI, creating a runaway dynamic that locks everyone else out.”
U.S. AI and crypto czar David Sacks praised Reflection on Thursday. “It’s great to see more American open source AI models. A meaningful segment of the global market will prefer the cost, customizability, and control that open source offers. We want the U.S. to win this category too,” he posted on social media platform X.
Aside from remaining globally competitive, Reflection says there are numerous benefits to frontier open intelligence, including safety, transparency, and accountability. (Frontier in this case refers to the most advanced, large-scale LLMs, like those currently in development behind closed doors at companies like OpenAI.) But it also flags the potential for misuse. High-profile players in the space, like OpenAI’s Sam Altman, have publicly fretted about bad actors weaponizing AI; another concern is that others in the space are not putting in place adequate safeguards, even as Altman pushes to avoid regulation. OpenAI has since announced it is working on its own open model.
“We believe the answer to AI safety is not ‘security through obscurity’ but rigorous science conducted in the open, where the global research community can contribute to solutions rather than a handful of companies making decisions behind closed doors,” Reflection’s blog says.
The startup has spent the past year assembling a crack team of experts who have “pioneered breakthroughs including PaLM, Gemini, AlphaGo, AlphaCode, AlphaProof, and contributed to ChatGPT and Character AI, among many others.” Its founders, Misha Laskin and Ioannis Antonoglou, worked on DeepMind’s Gemini and Go-playing AI AlphaGo, respectively.
The company also noted that it developed a large language model and “reinforcement learning platform capable of training massive Mixture-of-Experts (MoEs) models at frontier scale.” TechCrunch reported that MoE models are a type of architecture that powers these super advanced, frontier LLMs.
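To make the Mixture-of-Experts idea concrete: a small router scores each input and only the top-k experts process it, so most of the model stays inactive per input. The toy sketch below is a generic illustration of that routing principle, not Reflection’s architecture; the experts, router weights, and inputs are all invented for the example.

```python
# Toy Mixture-of-Experts routing: a linear router scores the input,
# only the top-k experts run, and their outputs are mixed by the
# renormalized router probabilities (sparse activation).
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_weights, k=2):
    """Route input x to the top-k experts and mix their outputs."""
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in router_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * experts[i](x) for i in top)

# Tiny stand-in "experts"; in a real model each is a full sub-network.
experts = [lambda v: sum(v), lambda v: max(v), lambda v: min(v), lambda v: v[0]]
router = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [-1.0, 1.0]]
print(moe_forward([2.0, 1.0], experts, router, k=2))
```

The design trade-off this illustrates is why MoE matters at frontier scale: total parameter count can grow with the number of experts while per-input compute stays roughly constant.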
“We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we’re now bringing these methods to general agentic reasoning,” the blog states.
Reflection also stated it has come up with a commercial model that will allow the company to sustain itself, while developing frontier models. It aims to release its first model early next year, TechCrunch reported.
For more on the difference between closed AI models and those that are open-weight, check out this explainer.
Chloe Aiello

Opinions expressed by Entrepreneur contributors are their own.
While AI has existed since the 1950s, it has only recently become a household name with the introduction of OpenAI’s ChatGPT less than two years ago. As an investor in both OpenAI and xAI, I continuously ask myself about the ethical ramifications of this new technology, how intelligent it really is, and whether it is the best path forward for humanity. As an experiment, I asked ChatGPT its opinion about artificial intelligence (AI) systems in today’s society, and this is what it had to say:
“As an artificial intelligence language model, I do not have personal opinions or beliefs. However, I can provide an objective analysis of AI systems in today’s society. AI has brought about numerous benefits to society, such as improved efficiency, accuracy, and decision-making processes. It has also contributed to advancements in medical research, transportation, and various other industries. However, the deployment of AI systems in today’s society also presents potential risks, such as job displacement, privacy concerns, and the possibility of biased algorithms. Therefore, it is crucial to ensure that AI is developed ethically and responsibly to mitigate these risks and ensure that the technology benefits society as a whole. In summary, AI systems have immense potential to positively impact society, but their deployment must be done with careful consideration and ethical responsibility.”
While this response shows AI is still robotic and objective in its methods of “thinking,” it is also, in fact, intimately aware of the ethical concerns bearing over its use. At this critical point in history, where we sit at the precipice of mass AI adoption, we still can’t seem to decide whether AI will be our savior or our villain. Ultimately, it comes down to the applications and responsible management of AI systems.
As an investor, I am in the unique position of being able to help push these new AI companies forward. In the last two years, we have seen a “gold rush” of AI startups. Many have raised impressive rounds; many have faltered, and some have already seen acquisitions. However, the first two I invested in, OpenAI and xAI, are now well-known: the former for having tackled the challenge of global mass adoption, and the latter for its commitment to building a more ethical system for the AI ecoverse.
When it comes to applications in AI, it’s safe to say that humans have always strived to create tools to make our lives easier; this is not new. Historically, “mechanical muscles” and outsourcing shifted physical labor. Now, we’re at the point where “mechanical minds” will unleash a cascade of applications across industries. Automation has evolved beyond just the physical.
One major development pushing the AI adoption frontier forward is Microsoft’s continued investment in OpenAI. This partnership will likely lead to everything from optimized Excel spreadsheets to AI-generated PowerPoints and even more support in email management. White-collar work is now also ripe for disruption.
In finance, AI can be leveraged in various ways. AI algorithms can identify patterns of behavior, such as unusual transactions or identity theft, allowing financial institutions to detect and prevent fraudulent activity quickly. AI can also analyze market trends and make projections about future ones, assisting institutions in making more accurate decisions.
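A minimal sketch of the “unusual transaction” idea, assuming a simple statistical rule: amounts far from a customer’s typical spending are flagged. Production systems use learned models on far richer features; the history and threshold here are hypothetical and only illustrate the pattern-detection principle.

```python
# Flag a transaction if it deviates more than `threshold` standard
# deviations from the customer's historical amounts (a z-score rule).
import statistics

def flag_unusual(amounts, new_amount, threshold=3.0):
    """Return True if new_amount is far outside the historical pattern."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return new_amount != mean
    return abs(new_amount - mean) / stdev > threshold

history = [42.0, 38.5, 45.0, 40.0, 43.5, 39.0]  # hypothetical card history
print(flag_unusual(history, 41.0))   # False — typical amount
print(flag_unusual(history, 950.0))  # True — far outside the pattern
```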
While this is only one sample of industry transformation, there are some areas of work that will be impacted less by AI. These industries are those operating in information asymmetry — such as early-stage venture capital. For AI to work effectively, the model must have access to data. Industries whose data is private, segregated and complex cannot train and build inferences in the same way that a model based on publicly available information can. The strength of AI is dependent on the value of its underlying data and model, which in turn is dependent on the quality of the rules set forth by the humans crafting the algorithms.
In terms of management, we must maintain an element of skepticism and criticism as AI adoption grows. These tools will continue to develop, but they should not be treated as an all-knowing source of truth. Again, this is critical because these systems only know what is in their underlying data. As people, investors and business-minded individuals, we must acknowledge the strength of these systems while recognizing that they must be constantly maintained. While AI employs a constantly evolving algorithm that learns from itself and its experiences, we must still continue to adjust the rules and data sources supporting these AI systems.
Related: AI Will Radically Transform the Workplace — Here’s How HR Teams Can Prepare for It
Diversity of thought and perspective is critical for those who have the power to develop these systems. A system created by humans with biases will intrinsically be biased as well. We must keep human values in control of these systems. In today’s society, where truth seems to be subjective, belief in these systems can be both extremely advantageous and extremely detrimental. We must refrain from programming systems to tell people what they want to hear. It is important for people to understand and respect the limitations of AI.
At the same time, just because humanity is capable of doing something doesn’t always mean it should. We could likely replace most jobs with AI, but should we? Where do we draw the line? How do we ensure honesty and integrity in future systems and platforms? This is part of what xAI and OpenAI have committed to tackling and why I have believed in and supported them from the beginning.
AI will undoubtedly transform all of our lives, but this transformation will only be positive if we continue to be critical stewards of truth and information. As an investor, I encourage everyone to maintain a healthy dose of skepticism when investing in AI-powered solutions. Look at the human beings who stand behind those systems, because their beliefs are the ones teaching and driving the solution.
Ozi Amanat

A recent study by Lenovo has revealed a notable divide between CEOs and CIOs concerning the strategic deployment and scaling of Artificial Intelligence (AI) within companies. This issue emerges amidst the rapid AI adoption in ASEAN markets, including the Philippines, where AI has been growing at a compound annual growth rate of about 40%.
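To put a roughly 40% compound annual growth rate in perspective, it implies the market nearly triples in three years. The starting value below is a hypothetical index, used only to show the compounding arithmetic.

```python
# Illustrative only: compounding a 40% CAGR from a hypothetical base.
def grow(start: float, cagr: float, years: int) -> float:
    """Value after compounding `cagr` (as a fraction) for `years` years."""
    return start * (1.0 + cagr) ** years

start = 100.0  # hypothetical market-size index, year 0
print(round(grow(start, 0.40, 3), 1))  # 274.4 — roughly 2.7x in three years
```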
Driven by a fear of falling behind competitors, many executives are aggressively pursuing AI integration, resulting in plans to increase AI spending by 45% in 2024 compared to the previous year. CIOs, however, are grappling with more immediate operational challenges such as cybersecurity and talent retention, which are further complicated by emerging technologies like Generative AI (GenAI). With limited budgets and a risk-sensitive outlook, 9% of CIOs even view AI initiatives as a potential distraction.
Lenovo’s comprehensive survey involved 900 IT and business decision-makers and was particularly focused on the impact of GenAI. The study provides critical insights into why there are differing outlooks within the C-suite regarding the technology’s potential and challenges.
“We recognize the challenges that come with adopting AI as much as we also acknowledge the immense potential it can bring to our business and people,” says Michael Ngan, general manager of Lenovo Philippines. “As the landscape of AI rapidly evolves, it’s imperative for organizations to navigate the complexities with a unified vision,” he shared.
Lenovo’s solid infrastructure solutions, together with a vast network of independent software vendors (ISVs), guarantee seamless and adaptable AI implementations suitable for companies of varying sizes. Moreover, through Lenovo’s AI Innovators Program, a collaboration with leading software partners, Lenovo delivers tailor-made, ready-to-implement AI solutions that span the entirety of customer operations.
Last year, the global service provider unveiled its comprehensive vision “AI for All” at the 9th Global Tech World Event in Austin, Texas. For more information, visit www.lenovo.com/ph/en/.
Gadgets Magazine