ReportWire

Tag: ai technology

  • Krieger: AI has arrived for Long Island’s development community | Long Island Business News


    In Brief:
    • AI enhances building operations by predicting occupancy and environmental needs for improved comfort and efficiency.
    • HVAC systems use AI to optimize energy consumption, processing real-time data on temperature, humidity, and weather.
    • AI monitors equipment and lighting, detecting issues early and adjusting usage for cost savings.
    • Construction design benefits as AI integrates data to improve planning, schedules, and overall productivity.

    More than 25 years ago, the Long Island development community was hailing the introduction of what was described as “the smart building.” One could program a system to say: “If ambient temperature exceeds 72 degrees Fahrenheit, turn on the air conditioning” or “turn off the lights at 6 p.m.”  It was thought to be revolutionary in controlling costs, providing for tenant or visitor comfort, and monitoring energy consumption.

    If that was a “smart building” in the year 2000, hold onto your hats in 2026.

    AI is taking the smart building and giving it a doctorate.

    Unlike the reactive monitoring devices of the early smart building, AI systems learn: they can predict needs, respond accordingly and deliver a level of efficiency that was unimaginable just a few years ago. It’s the difference between a programmable thermostat and a system that understands the thermal dynamics of your entire building and anticipates your needs before you are even aware of them.

    The most significant impact of AI will likely be in the area of heating, ventilation and air conditioning, which typically consume 40% to 50% of a building’s energy. Today’s AI systems are beginning to process data from sensors integrated into the design before the foundation is poured. These sensors will send real-time information to AI regarding temperature, humidity, occupancy and even outdoor weather conditions.

    AI will then predict patterns based on occupancy data, anticipating when spaces will be used, and then direct the HVAC system to achieve maximum efficiency while protecting comfort levels. Its learning algorithms will continue to refine its response, adapting to changing weather and whether the space was actually used as AI initially predicted.

    But that is not AI’s only use. Building owners know that traditional maintenance follows fixed schedules. The problem with this approach is that equipment may be replaced prematurely or, worse, early warning signs of pending failure are missed. AI systems will continuously monitor equipment performance, detecting even subtle warning signs such as electric motors that seem a little “off,” fluctuations in electrical systems, or unexpected plumbing pressures. It is akin to having an all-knowing custodian with superpowers working 24/7.

    Lighting systems will also be integrated with AI, going far beyond the occupancy sensors that currently turn lights on and off when they detect motion in a room. AI can now review space-utilization data to understand how different areas are used throughout the day and provide light accordingly. That means a reduction in energy costs.

    Our construction company, and its president Jon Weiss, will spend many hours working on construction designs that will eventually become steel and concrete. AI systems have become an associate that can take what appear to be unconnected disciplines and create a holistic approach to building design and construction schedules. In the year to come, AI will not only accumulate more information but learn from that data and offer solutions that create efficiencies and improve productivity at the construction site.

    We are looking at an era when computing power is expected to become cheaper while AI becomes more sophisticated.

    For developers welcoming in 2026, AI is offering an essential tool for design, construction and function that is ensuring smart buildings will now come with an advanced diploma.

     

    Steven Krieger is CEO of B2K Development in Jericho, whose family of companies includes B2K Construction.



    Opinion


  • Disney is investing $1 billion in OpenAI and licensing its characters for Sora


    (CNN) — Disney is taking a $1 billion equity stake in OpenAI, while also striking a deal that would allow its famous characters to be used on Sora, the AI company’s video generation platform.

    Disney’s investment in OpenAI is the first such major licensing agreement for Sora.

    Under the agreement, users of Sora, OpenAI’s short-form video-generating social media network, will be allowed to make videos using more than 200 Disney animated characters. Those characters include Mickey and Minnie Mouse; Disney princesses like Ariel, Belle and Cinderella; and characters from “Frozen,” “Moana” and “Toy Story.” Animated characters from Marvel and Lucasfilm, including Black Panther and Star Wars characters like Yoda, are included as well, although the agreement does not cover any talent likenesses or voices.

    Users of OpenAI’s popular chatbot ChatGPT will also be able to ask the bot to create images using the Disney characters.

    “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Disney CEO Robert A. Iger said in a statement.

    OpenAI, which has come under scrutiny for copyright violations – and also for striking massive ‘circular’ deals leading to fears of an AI bubble – said the deal shows how the creative community and AI can get along.

    “Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content,” said Sam Altman, co-founder and CEO of OpenAI. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”

    Shortly after the announcement, Iger and Altman both sat down with CNBC’s David Faber, during which the Disney boss stressed that the deal “does not, in any way, represent a threat to the creators.”

    “In fact, the opposite, I think it honors them and respects them, in part because there’s a license fee associated with it,” Iger said, later adding that the goal is to “continue to honor, respect, value the creative community in general.”

    Iger also stressed that the deal allows Disney to “be comfortable that OpenAI is putting guardrails essentially around how these are used,” adding that, “really, there’s nothing for us to be concerned about from a consumer perspective.” Altman, too, stressed the presence of guardrails, telling Faber that “it’s very important that we enable Disney to set and evolve those guardrails over time, but they will, of course, be in there.”

    The deal is exclusive, per Iger, at least in part. The Disney CEO hinted that “there is exclusivity, basically, at the beginning of the three-year agreement,” but remained mum on what that means. Asked if OpenAI is pursuing similar deals with other companies, Altman said, “I won’t rule out anything in the future, but we think this alone is going to be a wonderful start.”

    Disney has previously sued AI companies for using its intellectual property. On Monday, the company sent Google a cease-and-desist letter, according to a source familiar with the situation.

    The cease-and-desist letter claims that Google’s AI products, including its image- and video-generating products Veo and Nano Banana, are infringing Disney’s copyrights “on a massive scale” by allowing users to create images and videos depicting its characters. The letter alleges that Google has “refused to implement any technological measures to mitigate or prevent copyright infringement.”

    In response, a Google spokesperson said the company has “a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them. More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content.”

    Disney had already sent similar cease and desist letters to Meta and Character.AI. In June, Disney and Universal sued AI photo generation company Midjourney, alleging the company violated copyright law.

    This story has been updated with additional developments and context.


    Hadas Gold and CNN


  • Nearly a third of American teens interact with AI chatbots daily, study finds


    New York (CNN) — Nearly a third of US teenagers say they use AI chatbots daily, a new study finds, shedding light on how young people are embracing a technology that’s raised critical safety concerns around mental health impacts and exposure to mature content for kids.

    The Pew Research Center study, which marks the group’s first time surveying teens on their general AI chatbot use, found that nearly 70% of American teens have used a chatbot at least once. And among those who use AI chatbots daily, 16% said they did so several times a day or “almost constantly.”

    AI chatbots have been pitched as learning and schoolwork tools for young people, but some teens have also turned to them for companionship or romantic relationships. That’s contributed to questions about whether young people should use chatbots in the first place. Some experts have worried that their use even in a learning context could stunt development.

    Pew surveyed nearly 1,500 US teens between the ages of 13 and 17 for the report, and the pool was designed to be representative across gender, age, race and ethnicity, and household income.

    ChatGPT was by far the most popular AI chatbot, with more than half of teens reporting having used it. The other top players were Google’s Gemini, Meta AI, Microsoft’s Copilot, Character.AI and Anthropic’s Claude, in that order.

    A nearly equal proportion of girls and boys — 64% and 63%, respectively — say they’ve used an AI chatbot. Teens ages 15 to 17 are slightly more likely (68%) to say they’ve used chatbots than those ages 13 to 14 (57%). And usage increases slightly as household income goes up, the survey found.

    Just shy of 70% of Black and Hispanic teens say they’ve used an AI chatbot, slightly higher than the 58% of White teens who say the same.

    The findings come after two of the major AI firms, OpenAI and Character.AI, have faced lawsuits from families who alleged the apps played a role in their teens’ suicides or mental health issues. OpenAI subsequently said it would roll out parental controls and age restrictions. And Character.AI has stopped allowing teens to engage in back-and-forth conversations with its AI-generated characters.

    Meta also came under fire earlier this year after reports emerged that its AI chatbot would engage in sexual conversations with minors. The company said it had updated its policies and next year will give parents the ability to block teens from chatting with AI characters on Instagram.

    At least one online safety group, Common Sense Media, has advised parents not to allow children under 18 to use companion-like AI chatbots, saying they pose “unacceptable risks” to young people.

    Some experts have also raised concerns that the use of AI for schoolwork could encourage cheating, although others say the technology can provide more personalized learning support.

    Meanwhile, AI companies have pushed to get their chatbots into schools. OpenAI, Microsoft and Anthropic have all rolled out tools for students and teachers. Earlier this year, the companies also partnered with teachers unions to launch an AI instruction academy for educators.

    Microsoft, in particular, has sought to position its Copilot as the safest choice for parents, with AI CEO Mustafa Suleyman telling CNN in October that it will never allow romantic or sexual conversations for adults or children.


    Clare Duffy and CNN


  • OpenAI’s Sora bans Martin Luther King Jr. deepfakes after his family complained


    New York (CNN) — OpenAI announced that it has “paused” users’ ability to generate videos of Martin Luther King Jr. on its artificial intelligence video tool Sora, following backlash over “disrespectful depictions.”

    “While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used,” the company said in a Thursday statement posted on X. “Authorized representatives or estate owners can request that their likeness not be used in Sora cameos.”

    The change comes a few weeks after the launch of Sora 2, which lets users make realistic-looking AI-generated videos using real and historical people. Critics charge that it’s contributing to an era of misinformation and “AI slop” that is blurring the lines between what’s real and what’s fake.

    The product has also generated online discussion about ethics around the use of this technology. Some creators were using King’s likeness for inappropriate purposes. Users recently recreated the late actor Robin Williams in AI videos, prompting his daughter Zelda to call them “disturbing.”

    OpenAI said it “thanks Dr. Bernice A. King for reaching out on behalf of King, Inc., and John Hope Bryant and the AI Ethics Council for creating space for conversations like this.”

    The King Center didn’t immediately respond to CNN’s request for comment.


    Jordan Valinsky and CNN


  • In Fight Against Fake Luxuries, Real Authentication Champions Human Expertise Over AI Technology


    One of the original and leading luxury goods authentication companies explains how and why trained experts are essential in helping shoppers ensure the value of their purchases and helping retailers and brands protect their reputations.

    As counterfeit Chanel, Burberry, Gucci and other luxury items flood online retailers and even slip into stores, shoppers are turning to professional authenticators to make sure their purchases are real – and many authenticators are turning to AI to handle the demand. But Real Authentication, the leading luxury goods authentication service, cautions that AI is no match for human expertise in detecting counterfeits.

    Anastacia Black and Jenna Padilla founded Real Authentication in 2016, making it one of the first independent luxury goods authentication companies. With experience in both fashion merchandising and luxury flipping, they identified a gap in the market and now serve shoppers, online retailers and brands in more than 90% of countries worldwide. The company authenticates over 170 premium brands, including Louis Vuitton, Christian Dior, Prada and many more.

    The counterfeit market for these and other premium brands is immense and growing. In 2023, officials in New York City seized more than $1 billion of counterfeit handbags, shoes and other luxury items in a single raid. In 2024, officials in the European Union confiscated 152 million counterfeit items valued at about $3.7 billion, a 77 percent increase in items and a 68 percent increase in value over the year before.

    Shoppers, retail platforms and brands are responding to counterfeiting by utilizing authentication companies to verify the provenance of luxury items either before or after purchase. To deal with the volume of requests, many prominent authenticators now rely on AI-powered scans of photographs, with one company offering results in 60 seconds.

    The founders of Real Authentication say relying on AI alone is not just unrealistic but unreliable. Luxury brands release new or updated items and styles each season, including new textiles, hardware and typography. As a result, AI simply lacks the knowledge needed for verification, especially as the quality of counterfeit items improves.

    “In a world where counterfeiting is becoming increasingly sophisticated, our human experts deliver the nuanced judgment and experience that AI simply cannot match,” says Co-founder Anastacia Black.

    Luxury goods submitted to Real Authentication are typically reviewed by two highly trained experts who apply their extensive hands-on experience in conjunction with the company’s proprietary archive of over 7 million reference images to analyze the fine details of every submission. The Real Authentication experts analyze every aspect of submitted items, from general product information all the way down to the denier, or thickness of threads, in a stitch.

    The company also reinforces its human authenticators’ work by leveraging its proprietary Smart Database Scan™ technology, which cross-checks numerous data points within its system and identifies potential red flags. The Real Authentication expert fraud detection team provides an additional layer of quality assurance.

    “Our dedication to authenticating luxury goods with precision is what sets us apart,” says Co-founder Jenna Padilla. “We believe in the power of human insight to protect consumers and the integrity of iconic brands.”

    Real Authentication now offers expert luxury designer authentication services for handbags, watches, streetwear, eyewear, clothing, jewelry, shoes, scarves, hats, and home goods such as pillows, glassware, and blankets, from more than 170 brands. The quality of the company’s services is documented in thousands of reviews and testimonials from luxury shoppers like entrepreneur and television star Bethenny Frankel.

    Real Authentication offers individual verifications, express service, a self-serve discount program, enterprise and pawn solutions, self-verifying certificates of authenticity, as well as white-label authentication services for brands and high-volume businesses. The company encourages shoppers and platforms to protect their investments and reputations through the use of authentication services.

    For more information, visit realauthentication.com.

    About Real Authentication

    Real Authentication is a virtual luxury goods authentication service. It verifies the authenticity of new and used brand-name goods with the expertise of its team of world-renowned brand experts. Clients can use the mobile app to upload images and receive a determination within 24 hours. Customers can ensure their luxury goods are the real deal at realauthentication.com.

    Source: Real Authentication



  • Nvidia’s forecast dampens AI enthusiasm in other tech stocks


    By Noel Randewich and Saqib Iqbal Ahmed

    (Reuters) - Nvidia dragged technology heavyweights lower after the chipmaker’s earnings disappointed investors who had been hoping the results would fuel fresh gains in Wall Street’s most valuable companies, and sent stocks in Asia down on Thursday.

    Nasdaq futures initially dropped about 1% following Nvidia’s quarterly earnings report late Wednesday, suggesting traders expected tech stocks to lose ground.

    Nvidia dropped almost 7% and lost $200 billion in stock market value after it forecast third-quarter gross margins that could miss market estimates and revenue that was largely in line. A handful of other AI-related companies shed around $100 billion in combined value.

    Shares of Broadcom and Advanced Micro Devices were each down about 2%. Microsoft and Amazon each dipped almost 1%.

    Weakness in tech stocks continued into Asian trade on Thursday. Nvidia’s chip contractor TSMC slid 2%, and declines in other tech names weighed on shares in Tokyo and Seoul, dragging Korea’s KOSPI to a two-week low.

    Nvidia’s Frankfurt-listed shares slightly pared back the after-hours move, falling 5%. Even if Wednesday’s late-day dip extends into Thursday, it would be well short of the 11% price swing the options market had priced for the shares, according to data from options analytics firm ORATS.

    Surging demand for its AI chips helped Nvidia crush consensus analyst estimates for several quarters, a trend that led investors to expect the company to exceed forecasts by higher and higher margins.

    Nvidia’s soft forecasts overshadowed a beat on second-quarter revenue and adjusted earnings as well as the unveiling of a $50 billion share buyback.

    “They beat but this was just one of those situations where expectations were so high. I don’t know that they could have had a good enough number for people to be happy,” said JJ Kinahan, CEO of IG North America and president of online broker Tastytrade.

    The lackluster response to Nvidia’s earnings report could help set the tone for market sentiment heading into what is historically a volatile time of the year. The S&P 500 has fallen in September by an average of 0.8% since World War Two, the worst performance of any month, according to CFRA data.

    Investors are also watching next week’s U.S. employment report for signs on whether the labor market weakness that roiled stocks in early August has dissipated.

    Optimism about AI technology, in part due to Nvidia’s explosive growth, has fueled gains on Wall Street over the past year.

    However, confidence in that rally has wavered in recent weeks following an earnings season that saw investors punish shares of tech companies whose results failed to justify rich valuations.

    Investors have also become concerned about increases in already hefty spending by Microsoft, Alphabet and other major players in the race to dominate emerging AI technology. Microsoft and Alphabet’s stocks remain down since their reports last month.

    Nvidia forecast revenue of $32.5 billion, plus or minus 2%, for its fiscal third quarter, compared with analysts’ average estimate of $31.8 billion, according to LSEG data. That revenue forecast implies 80% growth from the year-ago quarter.

    The Santa Clara, California-based company expects adjusted gross margin of 75%, plus or minus 50 basis points, in the third quarter. Analysts on average forecast gross margin to be 75.5%, according to LSEG data.

    Nvidia’s stock dropped 2.1% in Wednesday’s session, ahead of its report. It remains up about 150% so far in 2024, making it the biggest winner in Wall Street’s AI rally.

    Nvidia’s stock was valued at 36 times earnings ahead of its quarterly report, inexpensive compared to its average of 41 over the past five years. The S&P 500 is trading at 21 times expected earnings, compared to a five-year average of 18.

    (Reporting by Noel Randewich in San Francisco; Additional reporting by Saqib Ahmed in New York; Editing by Ira Iosebashvili, Lisa Shumaker and Mark Potter)



  • Apple and Google Collaboration – Gemini AI to Boost iPhone’s Smart Functions


    March 18 (Reuters) – In a significant development, Apple (AAPL.O) is currently in negotiations to integrate Google’s advanced Gemini artificial intelligence platform into its iPhone offerings, according to sources reported by Bloomberg News on Monday. The discussions revolve around the licensing of Gemini to enhance certain upcoming features of the iPhone’s software later this year, though specifics on the agreement’s terms, branding, or the exact implementation have yet to be solidified.

    Market Reaction and Strategic Timing

    Following the news, Alphabet’s shares saw a substantial increase of over 6% in early trading in the United States, with Apple’s stock also rising by 2.5%. Any formal announcement of a deal is anticipated to be postponed until June, coinciding with Apple’s yearly developer conference.

    Apple has been in conversations with OpenAI, the creators of ChatGPT, about incorporating its model, highlighting Apple’s keen interest in bolstering its AI capabilities.

    Potential Impact of the Deal

    Apple, Google (owned by Alphabet, GOOGL.O) and OpenAI did not immediately respond to Reuters’ requests for comment. A collaboration between these tech giants could significantly extend Google’s AI services across Apple’s vast ecosystem, which boasts over 2 billion active devices.

    This move is seen as a strategic effort by Google to strengthen its position against Microsoft-backed OpenAI, while simultaneously addressing Apple’s challenges in rapidly deploying AI applications—a factor contributing to Apple’s recent 10% share price decline and its loss of the title as the world’s most valuable company.

    Regulatory Considerations and Future Plans

    However, this deal might attract increased attention from U.S. regulators, given Google’s previous legal challenges regarding its search engine dominance and the financial arrangements with Apple to maintain its position.

    Daniel Ives, an analyst at Wedbush, highlighted the significance of this partnership, stating, “This strategic partnership is a critical element in Apple’s AI strategy, uniting with Google to leverage Gemini for powering AI features Apple plans to introduce.” He further emphasized the advantage for Google, noting the access to Apple’s substantial user base and the considerable licensing fees involved.

    Google’s January collaboration with Samsung, Apple’s competitor, to implement its Gemini AI in the Galaxy S24 smartphone series was part of its broader strategy to enhance Gemini’s adoption following initial setbacks. Apple CEO Tim Cook recently indicated the company’s substantial investment in generative AI, with plans to unveil its applications later in the year.

    According to Bloomberg, while Apple aims to deploy its in-house AI models for certain new functionalities in the forthcoming iOS 18, it is also exploring partnerships to drive generative AI features, including image creation and essay writing based on simple inputs.


    Srdjan Ilic


  • New AI Listens to Toilet Sounds to Detect Diarrhea


    Dec. 27, 2022 — Artificial intelligence has achieved another milestone: Discerning the sound of an unhealthy bowel movement. 

    A design for a “Diarrhea Detector” that could alert health officials to disease outbreaks like cholera was recently presented by engineers from the Georgia Tech Research Institute. Someday, the AI could even be used with home smart devices to monitor one’s bowel health. 

    A prototype accurately identified diarrhea 98% of the time in tests, the engineers told a conference of the Acoustical Society of America in Nashville. Even with background noise, it was correct 96% of the time.

    Cholera infects millions of people each year, killing up to 143,000 who become dehydrated from severe diarrhea, according to the World Health Organization. Many deaths could be avoided with an oral rehydration solution if the outbreak is spotted fast enough. Cholera can be lethal within 24 hours after symptoms start. 

    The device could be installed in public toilets where inadequate plumbing raises the risk for a cholera outbreak.

    “Cholera typically has a more watery sound to it — it can sound a lot like urination and it doesn’t have a lot of the flatulence notes in general,” says project co-lead Maia Gatlin, an aerospace engineer and PhD candidate at the Georgia Tech Research Institute. “That someone is having severe diarrhea, and that they are having a lot of it — that can be captured.” 

    The idea grew out of conversations about how COVID-19 can be monitored by analyzing sewage, says project co-lead Alexis Noel, PhD, a biomechanics engineering researcher at the institute. 

    Other researchers have considered video analysis to look for diarrhea. 

    “I was curious if we could detect diarrhea using sound,” Noel says, “as some folks are a little wary about having a camera pointed at their bum in the toilet.”

    First, the researchers gathered 350 publicly available audio samples of bathroom sounds from YouTube and Soundsnap. Some clips had up to 10 hours of diarrhea noises.

    The researchers listened to the samples to establish authenticity. 

    “We didn’t know these people, we didn’t know how they recorded, so we had to listen to a good bit,” Gatlin says. “There were definitely lots of fart sounds where we were like, ‘That’s not a fart, that’s someone blowing into their elbow.’”

    The sounds of defecation, urination, flatulence and diarrhea were converted into spectrogram images. A computer analyzed those images for about 10 hours using a convolutional neural network, software that teaches itself, through trial and error, to identify the subtle similarities among diarrhea spectrograms and how they differ from other toilet sounds.

    For example, urination has a consistent tone and defecation may have a singular tone. Diarrhea’s sound is more random.

    Once the AI learning process was complete, the researchers loaded the diarrhea-decoding algorithm onto a Raspberry Pi, a computer roughly the size of a credit card that costs less than $50. Georgia Tech student Cade Tyler 3D-printed a case for the motherboard with a microphone connection, a series of lights (green for acquiring a signal, red for diarrhea, and orange for “other”), and the words “Diarrhea Detector” inscribed on the surface. 

    The computer takes a 10-second audio recording, which is converted to a spectrogram and fed to the algorithm. The whole process takes only seconds.
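The first half of that pipeline, turning a short audio recording into the spectrogram image a classifier would consume, can be sketched in a few lines. The sketch below is a hypothetical illustration in Python/numpy, not the Georgia Tech team's actual code: the function name, frame size and hop size are assumptions, and the convolutional-network step that follows would be built separately with a deep-learning library.

```python
import numpy as np

def log_spectrogram(signal, sample_rate, frame_len=1024, hop=512):
    """Convert a 1-D audio signal into a log-magnitude spectrogram.

    Each column is the FFT magnitude of one Hann-windowed frame on a
    decibel-like log scale -- the kind of image representation the
    researchers describe feeding to a convolutional neural network.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    magnitudes = np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, n_bins)
    return 20 * np.log10(magnitudes + 1e-10).T        # (n_bins, n_frames)

# A 10-second, 16 kHz pure tone at 440 Hz stands in for a toilet recording.
sr = 16000
t = np.arange(10 * sr) / sr
spec = log_spectrogram(np.sin(2 * np.pi * 440 * t), sr)

# Sanity check: the brightest frequency bin should sit near 440 Hz
# (bin width is sr / frame_len = 15.625 Hz here).
peak_bin = spec.mean(axis=1).argmax()
peak_hz = peak_bin * sr / 1024
```

Running the sanity check on a known tone before training is the sort of step that matters here: if the spectrogram's frequency axis is wrong, the network would still "learn" something, just not what was intended.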

    The next iteration of the device would send a report via Wi-Fi or other wireless communication signal to a database, so public health officials can monitor for disease outbreaks. 

    “We’re not collecting anything identifiable about people,” Gatlin says.

    The researchers have not yet determined how many of these devices would be needed to cover a community, or where the ideal placement would be. 

    The algorithm still needs to be refined using better audio data collected in controlled conditions, from people who have provided informed consent, Gatlin says. Gatlin also hopes to train the AI to work in outdoor latrines, which are common in areas without functioning sewage systems. 

