ReportWire

Tag: machine learning

  • Capital One invests in ML during Q1 | Bank Automation News

    Capital One looked to technology to help navigate economic uncertainty as it invested in machine learning during the first quarter.  The $471 billion bank saw a 3% year-over-year increase in communications and data processing to $350 million as the bank used machine learning (ML) to assist in making business decisions based on market sentiment, Capital […]

    Brian Stone

  • JPMorgan Chase looks to quantum tech for deep hedging | Bank Automation News

    JPMorgan Chase is investing in quantum computing technologies research to discover its potential uses for deep hedging within financial services. Deep hedging can be used to efficiently learn the expectations and distribution of returns, offer improved performance and train quantum policies. The $3.6 trillion bank conducted a study last month to determine if deep hedging […]

    Whitney McDonald

  • How AI Is Shaping the Cybersecurity Landscape | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    As a CTO with more than 15 years of experience in the ever-changing field of cybersecurity, I have observed the immense impact that artificial intelligence (AI) has had on the broader technological landscape. I have also watched AI-based solutions emerge over the years as a crucial part of enhancing processes across many fields and disciplines. The cybersecurity field is no exception.

    The ability of AI-based machine learning (ML) models to identify patterns and make data-driven decisions and inferences presents a highly innovative approach to quickly identifying malware, directing incident response and even predicting potential breaches before they occur.

    Given the significant potential of AI in the field of cybersecurity, this article explores how AI fits into the broader cybersecurity landscape and how it can be effectively leveraged to enhance the security of businesses and their users, along with some of its limitations.

    Related: AI For Cybersecurity: Maximizing Strengths And Limiting Vulnerabilities

    Exploring the intersection of Artificial Intelligence and cybersecurity

    In the modern era of digitization, data is generated at an exponential rate, and an ever larger share of it, along with its metadata, is saved or received online, whether directly or indirectly. For that data to reach its intended destination or be used for a specific purpose, it frequently must be transmitted across a network or stored in a particular database or server.

    This is where cybersecurity practices come in: they safeguard data in transmission, storage and access, a crucial front in the battle against cyberattacks.

    As the technological landscape advances, cybercriminals tend to execute a diverse array of illicit activities, leading to substantial disruption in the online community. However, businesses can harness the power of AI and cybersecurity to mitigate risks and enhance security by detecting fraudulent activities and cyberattacks.

    That said, AI serves as a crucial factor in machine-based decision-making. For example, a sophisticated AI system could detect dubious actions on the network and block access until the requisite authorization is provided. These AI techniques are built on machine learning algorithms, which let programmers train models on data collected over an extended period.

    The AI algorithm is designed in such a way that it can recognize and differentiate between legitimate access and fraudulent access. Accordingly, it improves a business’s security by making attacks and irregularities more predictable.

    Furthermore, AI technologies have a computational and analytical speed that surpasses human efforts and can determine abnormalities far more quickly than present techniques. As a result, AI and ML techniques can together help businesses defend against cyberattacks that could cost them millions of dollars.

    Related: How Companies Can Utilize AI and Quantum Technologies to Improve Cybersecurity

    How to leverage AI in the cybersecurity landscape

    As previously discussed, AI has many advantages and applications in various fields, including cybersecurity. Given the rapidly evolving nature of cyberattacks and the development of sophisticated attack vectors, AI can help businesses keep their security up to date.

    AI can improve threat detection through automation and provide a more effective response than traditional security systems and manual techniques, helping businesses optimize their cybersecurity measures and stay ahead of potential threats. Here are some key benefits of leveraging AI in the cybersecurity landscape.

    Threat detection:

    Businesses can benefit tremendously from AI-based cybersecurity practices when identifying cyber threats and disruptive activity by cybercriminals. In fact, new malware is proliferating at an alarming rate, making it extremely challenging for traditional software systems to keep up with the evolving threat landscape.

    AI algorithms, however, can discover patterns, recognize malware and detect unauthorized activity before it impacts a system. This makes AI a valuable tool for protecting against cybercrime and maintaining the security of business operations.

    In fact, AI and ML-based cybersecurity solutions can significantly shorten the time required for threat identification and incident response, and they can immediately notify the business of unusual behavior.

    Bot defense:

    Another realm where AI is employed to combat digital threats is defense against bots. In today’s virtual landscape, a considerable volume of web traffic is generated by bots, some of which pose security hazards. Bots, also known as automated scripts or software, are used by cybercriminals to initiate attacks on websites, networks and systems.

    Furthermore, bots can be utilized for a variety of malicious activities, such as Distributed Denial of Service (DDoS) attacks, account takeovers and the scraping of sensitive information.

    AI-based solutions can be used to detect and block bot traffic by analyzing the patterns and behaviors of the incoming traffic. Machine learning algorithms can be trained to identify and flag suspicious activity, such as high volumes of artificial traffic coming from bot networks or abnormal requests.

    With AI, businesses can effectively discover the answers to questions like “what seems like a normal user journey?” and “what would be a potentially harmful atypical experience?” by looking at data-based behavioral patterns.
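    The traffic-analysis idea above can be sketched in a few lines. The sketch below is a toy illustration, not a production bot detector: it assumes per-client request counts have already been collected and simply flags statistical outliers, whereas real systems learn from many more behavioral signals.

```python
from statistics import mean, stdev

def flag_bot_candidates(request_counts, z_threshold=3.0):
    """Flag clients whose request volume is a statistical outlier.

    request_counts maps a client identifier (e.g., an IP address) to its
    request count over some time window. Clients more than z_threshold
    standard deviations above the mean are flagged as bot candidates.
    """
    counts = list(request_counts.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [client for client, n in request_counts.items()
            if (n - mu) / sigma > z_threshold]

# Fifty ordinary clients plus one client hammering the site.
traffic = {f"10.0.0.{i}": 20 + (i % 5) for i in range(50)}
traffic["203.0.113.9"] = 5000
print(flag_bot_candidates(traffic))  # -> ['203.0.113.9']
```

    A simple z-score rule like this is the most basic form of anomaly detection; in practice the same idea is applied with learned models over dozens of features such as request timing, navigation paths and header fingerprints.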

    Phishing detection:

    AI can greatly benefit the cybersecurity landscape by detecting sophisticated phishing attempts. AI-based machine learning models can analyze and classify incoming emails and messages to identify whether they are legitimate or fraudulent.

    By leveraging natural language processing techniques, AI can scan for keywords, phrases and other indicators that are commonly associated with phishing attacks. This lowers the possibility of a successful phishing attack by enabling security teams to swiftly identify and address possible risks.

    Moreover, AI algorithms can detect and flag suspicious URLs and domains. Phishing attackers often use deceptive URLs to trick users into revealing sensitive information. AI-based cybersecurity systems can analyze URLs and domain names to identify whether they are genuine or fake. These systems can then block access to malicious websites or display warning messages to users before they interact with the site.
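    To make the keyword- and URL-scanning idea concrete, here is a minimal rule-based phishing scorer. The phrase list and the suspicious-TLD list are invented for the example; a real ML system would learn its indicators from labeled mail rather than hard-coding them.

```python
import re

# Illustrative indicator lists; a trained model would learn these.
SUSPICIOUS_PHRASES = ["verify your account", "urgent action required",
                      "password expires", "confirm your identity"]
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}

def phishing_score(message: str) -> int:
    """Return a crude phishing score: one point per matched signal."""
    text = message.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Raw IP-address URLs are a common trick in deceptive links.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text):
        score += 1
    # URLs on top-level domains frequently abused in phishing campaigns.
    for url in re.findall(r"https?://\S+", text):
        domain = url.split("/")[2]
        if any(domain.endswith(tld) for tld in SUSPICIOUS_TLDS):
            score += 1
    return score

mail = "Urgent action required: verify your account at http://192.0.2.7/login"
print(phishing_score(mail))  # -> 3
```

    A score above some threshold would then route the message to quarantine or to a security analyst; an ML classifier replaces the hand-written rules with weights learned from examples but follows the same scan-and-score shape.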

    Related: The Rise of Artificial Intelligence in Cyber Defense

    Limitations of AI in cybersecurity

    AI systems, despite their ever-increasing sophistication, are limited by what they know. They can only work from the data sets they were trained on, which can leave them ineffective against novel or intricate threats outside that experience. These limitations also make them susceptible to false negatives and false positives, producing both missed threats and unnecessary alarms.

    Another crucial risk confronting AI systems is the presence of inherent biases and resultant discrimination. Such biases can emerge as a consequence of unbalanced data sets or faulty algorithms, thus engendering either unfair or inaccurate assessments, potentially leading to serious consequences.

    Finally, there exists the formidable threat of an over-reliance on AI systems, which can lead to risky complacency and, eventually, a false sense of safety. This could then lead to a regrettable lack of focus on other crucial aspects of cybersecurity, such as user education, the enforcement of policies and regular system updates and patches.

    The application of AI in detecting and combating cybercrime is undoubtedly a game-changer, bringing new and improved levels of efficacy to the cybersecurity domain. Also, it goes without saying that incorporating human intelligence along with AI can overcome any possible limitations posed by AI systems.

    It is now widely accepted that AI plays an important role in data security, and that acceptance is expected to grow over the coming years as more businesses realize its advantages. In fact, the market for AI in data security is projected to reach $66.22 billion by 2029, growing at a compound annual growth rate of 24.2%.

    To stay ahead of cyber threats, businesses should invest in developing and implementing novel AI-based cybersecurity solutions. Continued breakthroughs in AI will undoubtedly shape the future of data security, and businesses that leverage AI effectively will be best positioned to safeguard themselves against cyber threats, deliver exceptional user experiences and sustain a competitive advantage.

    Deepak Gupta

  • Education Nonprofits Release Free Tool to Detect ChatGPT-Generated Student Work

    Quill.org and CommonLit.org launched AIWritingCheck.org, a free tool that allows educators to determine whether a text passage was created by humans or AI.

    Press Release


    Jan 25, 2023 12:30 EST

    Education technology nonprofits Quill.org and CommonLit.org have launched AIWritingCheck.org to help teachers determine whether writing was human- or AI-generated text. At www.aiwritingcheck.org, teachers may enter a passage of text and, with the click of a button, learn whether the text was likely generated by a student or a computer.

    ChatGPT’s launch has prompted discussion about how to best equip teachers and students with tools to preserve academic integrity and protect the critically important skill of learning how to write. Quill and CommonLit built this new tool to be free, scalable, and user-friendly. AIWritingCheck.org requires no account or subscription and can process up to 100,000 essays per day, with an accuracy rate of 80-90%. 

    View & Download the Demo Video: https://www.loom.com/share/8bc43ec4dd9a40b3b3cdd78c92394668

    Alongside the launch of AI Writing Check, the nonprofits developed a toolkit to help educators utilize AI detection websites responsibly. The Quill and CommonLit teams are committed to supporting teachers in navigating the changing landscape and fast developments in AI, acting as translators among the tech, edtech, and K-12 communities. 

    View the toolkit: https://bit.ly/ai-check-toolkit

    Peter Gault, Quill.org’s Founder and Executive Director, said, “As tools like ChatGPT become ubiquitous and more advanced over time, many fear that millions of students will stop engaging in the critically important intellectual exercise of carefully reading a text, building a response, applying the rules of grammar, and revising their writing with feedback. While Quill is built on top of AI, we believe that AI should be used to encourage students to do more writing, not for the AI to write for the students.”

    Michelle Brown, CommonLit’s Founder and Chief Executive Officer, said, “The shortcut of using ChatGPT to do the thinking for you is not one that children will so easily overcome. In K-12, it’s the exercise of writing and the thinking that goes into organizing your thoughts that matters – not just the output. Education isn’t just about creating economic value; it’s about human development. It’s about our kids, and building their skills and confidence to become leaders who can communicate and leverage advanced tools.”

    Quill.org and CommonLit.org collectively serve more than 10 million economically disadvantaged students each year with free educational materials to advance literacy, representing 20% of all K-12 students. Quill.org’s mission is to help every low-income student in the United States become a strong writer and critical thinker through free online tools that help teachers by using artificial intelligence to automatically grade and provide feedback on student writing. CommonLit’s nonprofit mission is to unlock the potential of every child through reading, writing, speaking, listening, problem-solving, and collaboration.

    Source: Quill

  • How to Use AI Tools Like ChatGPT in Your Business

    Opinions expressed by Entrepreneur contributors are their own.

    Artificial intelligence is not only altering the course of the internet but also impacting the future of business. While some fear that it will have harmful economic repercussions by replacing people in jobs, AI can also serve as a game-changing tool to grow a business and increase its efficiency, helping with everything from lead generation to content creation.

    Here are a few popular new platforms and how you can apply them:

    ChatGPT

    Launched by OpenAI in November of 2022, this chatbot amassed more than a million users in just five days. A generative dialogue AI application, it can create new content, and its potential uses are virtually endless — from writing full essays to blog posts, song lyrics to cover letters and resumes. It can even draft legal contracts using local statutes/regulations pulled from public sources. For coding purposes, it can write and explain code, find errors in existing code and build websites. A master chatbot characterized by stunning accuracy, its greatest asset is an ability to mirror organic, lifelike conversations.

    Because of such far-reaching capabilities, ChatGPT can be used in virtually any industry and for businesses of any size. Here are just a few potential uses:

    Content Creation: In an online marketplace in which content is king, ChatGPT (currently free to use for the public) can write blog posts or social media posts using specific parameters that fit your needs, and also help generate new content concepts. You can ask it to formulate a list of ideas for podcasts or videos, then prepare an entire script pursuing these ideas. If you don’t like the immediate results, ask it to rewrite in a specific tone and/or either simplify or make copy more detailed and complex. It can also edit pre-written content to ensure that results are error-free and adhere to set tones and guidelines.

    Customer Service: ChatGPT can serve as a solution to the often notably ineffectual results of live chat queries on websites — to handle customer inquiries and provide answers to common questions, leaving more time for a customer service team to deal with more complicated or demanding issues. Because ChatGPT can communicate with customers using more natural language, it improves their website experience — but it’s important to note that it applies a general language model and might lack specialized knowledge in specific areas.

    Related: 3 Brands Crushing Instant Customer Service

    Hiring and Recruitment: Reading through cover letters and resumes can be an exhaustive part of growing a team, but ChatGPT users can simply copy and paste cover letters and ask it to search for key job criteria. It will comb through text to determine if candidates have relevant experience, possibly avoiding the need to hire outside recruiters, and certainly saving time.

    Lensa AI

    This is an AI image tool in the form of an editing app — one that’s been selling robustly in recent weeks. You may have seen it overtaking social media feeds with colorful art renditions of friends’ selfies. While it performs all the basic functions of a standard photo editing app (removing objects from photos, retouching blemishes, blurring backgrounds, adding filters, etc.), Lensa has differentiated itself by incorporating AI art generation. It’s free for basic use, but charges to transform selfies and portraits into AI imagery with a wider array of colors and styles, using compiled online art. These AI-generated avatars range from beautiful to bizarre, though preserve user facial features and other recognizable photo elements.

    Lensa is a low-cost method of elevating visual content. When posting photos of products or services, its tools can edit at an often professional level, and additional tools can be used to create unique renditions of photos, helping small businesses generate original content in a matter of minutes. It’s akin to having an art team at your fingertips.

    Related: What is Lensa AI? And Does it Pose Privacy and Ethical Concerns?

    Jasper

    An intuitive writing tool that can likewise be used for content creation — but without the cost or downtime of hiring outside help — Jasper, after a few words of instruction, can generate email marketing copy, blog posts, social media posts or product descriptions. You can request it to draft sales or marketing copy using certain tones (such as “informative” or “casual”), to which you can then add your voice and flair.

    It’s even possible to drop examples of your writing into Jasper: On-board AI will then analyze your voice and style of writing and emulate it for future content generation. This can be especially helpful if customers know you personally, and/or if a genuine voice to keep the feel of a personal connection with consumers is important.

    There are a few factors to consider, however:

    Learning Curve: Jasper has a steep one, but is powerful and effective once you’ve acquainted yourself with software navigation.

    Content Accuracy: When drafting content for specific industries, Jasper can miss technical jargon and industry-specific vocabulary. Because of this, the resulting content may require additional editing.

    Cost vs. Usage: Some users may need to weigh use frequency with expense, as Jasper AI plans start at $24/month and increase depending on use.

    That said, this application can clear hurdles for business owners in many fields, and of just about any scale. It can level the playing field for startups and new businesses working to establish a steady consumer base by sidestepping the steep costs of traditional content generation. Early adopters can see particularly notable benefits, so now is the time to acquaint yourself.

    Related: Why Email Marketing Is Better for Your Business Than Social Media

    Scot Chrisman

  • 5 Ways Machine Learning Will Impact the Entrepreneurial Landscape In 2023

    Opinions expressed by Entrepreneur contributors are their own.

    Machine learning is much more than a buzzword — it has become a major player for many businesses. More and more companies are implementing machine learning and other AI tools to supplement or streamline their activities. This is especially true after the Covid-19 pandemic accelerated the adoption of machine learning.

    The way that your company implements machine learning can have a direct impact on its performance in the year ahead, especially as AI tools become utilized in a broader range of business activities. By understanding the areas where machine learning is poised to have the greatest impact, you can move proactively to adopt these tools for your own entrepreneurial efforts.

    Related: Learn How Machine Learning Can Help Your Business

    1. Decision-making automation

    Machine learning’s ability to proficiently analyze and interpret large amounts of data in a rapid timeframe has made it an essential part of many businesses’ decision-making processes. In some cases, these tools can even be used to automate simpler, lower-level decisions that might otherwise be made by customer service reps or others.

    In this situation, machine learning draws data from previous actions and trends, and uses available data to recommend the most efficient solution to a problem or request. This allows employees at all levels to spend less time focused on more repetitive decision-making tasks so they can focus their efforts on more in-depth problems.

    This is undoubtedly part of why 81% of employees feel AI improves their work performance, with 49% specifically citing improved decision-making.

    2. Improved privacy compliance

    While many consumers have concerns about big data and machine learning negatively affecting their privacy, machine learning is often being used to enhance privacy compliance and protect data.

    In a recent article for the Turkish Journal of Computer and Mathematics Education, Pramod Misra details multiple ways machine learning can aid privacy compliance, namely through machine learning privacy meters, which assess potential privacy issues associated with other machine learning models; and privacy-preserving machine learning (PPML), which trains machine learning tools to protect confidential data.

    With these tools, Misra’s research team was able to use PPML to model threats and prevent data leaks from a variety of attack methods. In this case, machine learning is being used to ensure the security of other enterprise applications.

    Related: What Is Machine Learning, and How Can It Help With Content Marketing?

    3. Smarter customer recommendations

    One of the more popular uses for machine learning has been in customer recommendation engines. Examples of these tools include Amazon recommending additional items for a shopper to add to their cart based on past purchases, as well as Netflix’s personalized recommendations based on a customer’s viewing history and other factors.

    The end goal of machine learning, in this case, is to deliver a more streamlined and enjoyable experience for the customer, based on the data that they readily supply to the business. Notably, many of these machine learning tools also support direct feedback from customers to improve their recommendations.

    Though these data filtering tools are hardly new, they can still have a transformative impact on entrepreneurs in 2023. Businesses that can implement specific and relevant use cases for delivering personalized recommendations to their customers will be better positioned to deliver a positive experience that helps them stand out against the competition.
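    As a sketch of how such a recommendation engine works, the example below implements a minimal user-based collaborative filter. The purchase data is invented and Jaccard overlap is one of the simplest possible similarity measures; production engines of the kind Amazon and Netflix run use far richer models.

```python
# Hypothetical purchase history: user -> set of items bought.
purchases = {
    "ana":   {"keyboard", "mouse", "monitor"},
    "ben":   {"keyboard", "mouse"},
    "carla": {"monitor", "webcam"},
    "dan":   {"mouse", "monitor"},
}

def jaccard(a, b):
    """Similarity of two users: overlap of their purchase sets."""
    return len(a & b) / len(a | b)

def recommend(user, history, top_n=2):
    """Suggest items bought by similar users that this user lacks."""
    owned = history[user]
    scores = {}
    for other, items in history.items():
        if other == user:
            continue
        sim = jaccard(owned, items)
        if sim == 0:
            continue
        # Weight each unseen item by how similar its buyer is.
        for item in items - owned:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("ben", purchases))  # -> ['monitor']
```

    The direct-feedback loop mentioned above corresponds to updating the history (and thus the similarity scores) as customers accept or reject suggestions.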

    4. Generative AI

    In the latter half of 2022, generative AI proved to be one of the hottest topics in the machine learning space, garnering both enthusiasm and harsh criticism. Generative AI has been used to create highly realistic photos and videos, as well as generate “art” or even produce basic written content.

    Many artists and celebrities have spoken out against AI art, in large part because of how it uses others’ creations as source material to generate its own content. Despite the outcry, many businesses will likely make their own tentative forays into generative AI to speed up the creation of their own content and to reduce costs.

    Though this trend is certainly worth paying attention to, this is an area where entrepreneurs should proceed with caution. Generative AI is still prone to imperfections, and the backlash of using it could easily outweigh the potential benefits. Time will tell how this trend shapes the business and artistic landscape (for good or ill) in the year ahead.

    5. More efficient financial management

    Few things can have a greater impact on a business’s sustainability than its cash flow and overall financial management. Machine learning algorithms are playing an increasingly vital role in a wide variety of financial tasks to help leaders make better money-related decisions.

    For example, machine learning can be used for tasks like performing a cost analysis or predicting expenses associated with certain business activities. This allows leaders to better determine how an action will affect the bottom line, and if the investment will truly be “worth it.”

    Machine learning tools can also be used to protect businesses and customers from fraud. Fraud detection tools can use information such as the time and location where a customer typically uses their credit card to flag fraudulent purchases. Protecting customers is a sure way to enhance trust and build a loyal customer base.
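    As a toy version of the time-and-location idea, the sketch below flags transactions that fall outside a customer’s usual pattern. The profile fields are hypothetical; a real fraud model would score many learned features probabilistically rather than apply two hard rules.

```python
from datetime import datetime

# Hypothetical per-customer profile learned from past transactions.
profile = {
    "home_city": "Denver",
    "usual_hours": range(7, 23),  # customer normally shops 07:00-22:59
}

def flag_transaction(tx, profile):
    """Return the reasons, if any, a transaction looks unusual."""
    reasons = []
    if tx["city"] != profile["home_city"]:
        reasons.append("unfamiliar location")
    if tx["time"].hour not in profile["usual_hours"]:
        reasons.append("unusual hour")
    return reasons

tx = {"city": "Lagos", "time": datetime(2023, 1, 14, 3, 12), "amount": 950.0}
print(flag_transaction(tx, profile))  # -> ['unfamiliar location', 'unusual hour']
```

    A flagged transaction would typically trigger step-up verification rather than an outright block, so that false positives do not frustrate legitimate customers.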

    Are you prepared for how machine learning will impact you?

    Machine learning has already had a significant influence on a wide range of business activities, and that influence is only going to accelerate in 2023. Whether your business has already adopted AI tools or is just looking into machine learning, focusing on these tech tools can go a long way in driving better efficiency, productivity and profitability.

    Lucas Miller

  • Mosaic Data Science Combats Climate Change & Accelerates ESG Efforts With Custom Artificial Intelligence & Machine Learning Solutions

    Mosaic recently contributed AI/ML services to a custom application that alerts on carbon emissions and recommends renewable energy portfolios. The company is also working with a leading risk management software firm to accelerate corporate ESG adoption.

    Press Release


    Jan 9, 2023 13:15 EST

    Mosaic Data Science contributed machine learning algorithm development and deployment services to help a leading power firm automate the process of quantifying the switch from traditional energy sources to renewable energy portfolios while exploring the costs and tradeoffs of those offerings for its business-to-business customers. The solution is designed for enterprises that require power for a diverse set of business functions, such as industrial warehouses, production plants, and related physical infrastructure.

    The application relies on a highly scalable, custom mathematical optimization algorithm to select the products that eliminate or offset enough emissions to reach greenhouse gas (GHG) targets. Mosaic’s data scientists collaborated with key stakeholders to lay out requirements for an interactive dashboard and the algorithms driving the portfolio recommendations.

    In the past, this had been a manual, error-prone, and time-consuming effort as sales personnel had to piece together a portfolio to cover energy usage across tens of thousands of service locations for a customer over a multi-decade window. Automating the process is a massive win for the energy company and its customers.

    As the world becomes increasingly exposed to climate change impacts, more companies have stepped up their efforts to publish environmental, social, and governance (ESG) reports with emissions reduction goals. The project is just one example of the many use cases of data science techniques in solving carbon footprint reduction problems and combating climate change, contributing to a healthier future for our planet.

    Mosaic also works with a leading risk management software company to accelerate ESG adoption among global corporations. Mosaic is designing ML-based solutions to help corporations make more sustainable decisions. 

    According to Gartner, artificial intelligence was named one of the top technologies by CEOs to help accelerate sustainable business progress and could help deliver nearly one-third of the carbon emission reductions required by 2030. 

    “Mosaic’s artificial intelligence and machine learning skills can help organizations focus on sustainable processes & practices,” said Drew Clancy, VP of Marketing and Sales. “Too often people generalize AI as trying to sell you more products, but this technology should play a critical role in increasing our resilience to the effects of climate change by helping us identify risk factors and develop plans to mitigate them.”

    Companies that put AI at their core are far more likely to contribute positively to climate resilience, adaptation, and mitigation efforts than those that do not. Mosaic continues to be a champion of sustainability in its business practices. 

    About Mosaic Data Science

    Mosaic Data Science is a leading AI/ML services company focused on helping organizations build and deploy custom solutions. The company makes complex artificial intelligence and machine learning solutions actionable, explainable, and usable to any organization.

    Source: Mosaic Data Science

  • AI in Health Care: No, the Robots Are Not Taking Over

    Dec. 1, 2022 – It’s common for many people to fear the unknown, and exactly how artificial intelligence might transform the health care and medical experience is no exception. 

    People might be afraid, for example, that AI will remove all human interaction from health care in the future. Not true, say the experts. Doctors and other health care workers might fear the technology will replace their clinical judgment and experience. Also not true, experts say. 

    The AI robots are not taking over. 

    AI and machine learning remain technologies that add to human know-how. For example, AI can help track a patient over time better than a health care professional relying on memory alone, can speed up image analysis, and is very good at prediction.

    But AI will never replace human intuition in medicine, experts say.

    “AI is unemotional. It’s fast and very, very smart, but it does not have intuition,” says Naheed Kurji, board chair of the Alliance for Artificial Intelligence in Healthcare and CEO of Cyclica Inc. 

    Machine learning, a form of artificial intelligence where a computer learns over time as it gets more and more data, could sound threatening to a person who might not fully understand the technology. That’s why education and greater awareness are essential to ease any concerns about this growing technology. 

    “You need to have an understanding of human behavior and how to help people overcome their inherent fears of something new,” Kurji says. 

    “All this new science needs to be explained to the public, and machine learning is certainly one that deserves explanation,” says Angeli Moeller, PhD, head of data and integrations generating insights at Roche in Berlin, and board vice chair for the Alliance for Artificial Intelligence in Healthcare.

    “It’s useful to ground it in examples that the general population is familiar with and with technology that has grown,” she says. “On our smartphones, we benefit from a significant amount of machine learning – even if you just look at your Google search or your satellite navigation system.”

    Moeller says it’s helpful to think of AI as an assistant to a doctor, nurse, a caregiver, or even a patient trying to understand more about a medical diagnosis, treatment plan, or prognosis. 

    Also, with big data comes big responsibility. “Health care industry accountability is important,” she says. 

    With that in mind, the Alliance for Artificial Intelligence in Healthcare was created in 2019 as a forum for industry players – drug companies, biotechnology firms, and database entities – to convene and address important AI questions. The group seeks to answer some fundamental questions, including: How do we ensure the ethical and appropriate use of artificial intelligence in health care? How do we make sure that innovation gets to the patient as quickly as possible? 

    “If you think about your personal life, a decade ago, your car didn’t have autopilot modes where it drove itself,” says Sastry Chilukuri, co-CEO of Medidata and founder and president of Acorn AI. “You didn’t really have an iPhone – which is like a computer in your hand – much less like have an Apple Watch – which is like another minicomputer on your wrist pumping out all kinds of data.”

    “Our world has dramatically changed over just like the last 15 years,” he says. “It’s very interesting, I think. It’s a good time to be alive.”

    [ad_2]

    Source link

  • AI ‘Simulants’ Could Save Time and Money on New Medications

    AI ‘Simulants’ Could Save Time and Money on New Medications

    [ad_1]

    Nov. 30, 2022 – Artificial intelligence is poised to make clinical trials and drug development faster, cheaper, and more efficient. Part of this strategy is creating “synthetic control arms” that use data to create “simulants,” or computer-generated “patients” in a trial. 

    This way, researchers can enroll fewer real people and recruit enough participants in half the time. 

    Both patients and drug companies stand to gain, experts say. An advantage for patients, for example, is that the simulants receive the standard-of-care or placebo treatment, meaning all of the real people in the study end up getting the experimental treatment. For drug companies unsure of which of their drug candidates hold the most promise, AI and machine learning can narrow down the prospects. 

    “So far, machine learning has primarily been effective at optimizing efficiency – not getting a better drug but rather optimizing the efficiency of screening. AI uses the learnings from the past to make drug discovery more effective and more efficient,” says Angeli Moeller, PhD, head of data and integrations generating insights at drugmaker Roche in Berlin, and vice chair of the Alliance for Artificial Intelligence in Healthcare board. 

    “I’ll give you an example. You might have a thousand small molecules and you want to see which one of them is going to bind to a receptor that’s involved in a disease. With AI, you don’t have to screen thousands of candidates. Maybe you can screen just one hundred,” she says.
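    The screening idea Moeller describes can be sketched in a few lines. This is a hypothetical illustration, not Roche's method: the scoring function below is a random stand-in for a model trained on historical binding data.

```python
# Hypothetical sketch of ML-assisted virtual screening: instead of assaying
# all 1,000 candidate molecules in the lab, rank them with a trained model
# and physically screen only the top 100.
import random

random.seed(0)
candidates = [f"mol_{i}" for i in range(1000)]

def predicted_binding_score(molecule):
    # In practice this would be a machine-learning model trained on
    # historical assay results; here it just returns a random score.
    return random.random()

ranked = sorted(candidates, key=predicted_binding_score, reverse=True)
to_screen = ranked[:100]  # 10x fewer lab experiments to run

print(len(to_screen))  # 100
```

    The lab work shrinks by an order of magnitude; the model's only job is to decide which experiments are worth running.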

    ‘Synthetic’ Trial Participants

    The first clinical trials to use data-created matches for patients – instead of control patients matched for age, sex or other traits – have already started. For example, Imunon Inc., a biotechnology company that develops next-generation chemotherapy and immunotherapy, used a synthetic control arm in its phase 1B trial of an agent added to pre-surgical chemotherapy for ovarian cancer.

    This early study showed researchers it would be worthwhile to continue evaluating the new agent in a phase 2 trial. 

    Using a synthetic control arm is “extremely cool,” says Sastry Chilukuri, co-CEO of Medidata, the company that supplied the data for the Phase 1B trial, and founder and president of Acorn AI.

    “What we have is the first FDA and EMA approval of a synthetic control arm where you’re replacing the entire control arm by using synthetic control patients, and these are patients that you pull out of historic clinical trial data,” he says.

    A Wave of AI-Boosted Research?

    The role of AI in research is expected to grow. To date, most AI-driven drug discovery research has focused on neurology and oncology. The start in these specialties is “probably due to the high unmet medical need and many well-characterized targets,” notes a March 2022 news and analysis piece in the journal Nature. 

    It speculated that this use of AI is just the start of “a coming wave.”

     “There is an increasing interest in the utilization of synthetic control methods [that is, using external data to create controls],” according to a review article in Nature Medicine in September.  

    It said the FDA already approved a medication in 2017 for a form of a rare pediatric neurologic disorder, Batten disease, based on a study with historical control “participants.”

    One example in oncology where a synthetic control arm could make a difference is glioblastoma research, Chilukuri says. This brain cancer is extremely difficult to treat, and patients typically drop out of trials because they want the experimental treatment and don’t want to remain in the standard-of-care control group, he says. Also, “just given the life expectancy, it’s very difficult to complete a trial.” 

    Using a synthetic control arm could speed up research and improve the chances of completing a glioblastoma study, Chilukuri says. “And the patients actually get the experimental treatment.”

    Still Early Days

    AI also could help limit “non-responders” in research.

    Clinical trials “are really difficult, they’re time-consuming, and they’re extremely expensive,” says Naheed Kurji, chair of the Alliance for Artificial Intelligence in Healthcare board, and president and CEO of Cyclica Inc, a data-driven drug discovery company based in Toronto. 

    “Companies are working very hard at finding more efficient ways to bring AI to clinical trials so they get outcomes faster at a lower cost but also higher quality.”

    "There are a lot of clinical trials that fail, not because the molecule is not effective … but because the patients that were enrolled in a trial include a lot of non-responders. They just cancel out the responder data," says Kurji. 

    “You’ve heard a lot of people talk about how we are going to make more progress in the next decade than we did in the last century,” Chilukuri says. “And that’s simply because of this availability of high-resolution data that allows you to understand what’s happening at an individual level.”

    “That is going to create this explosion in precision medicine,” he predicts.

    In some ways, it’s still early days for AI in clinical research. Kurji says, “There’s a lot of work to be done, but I think you can point to many examples and many companies that have made some really big strides.”


    [ad_2]

    Source link

  • Scaling with purpose: 4 ways to future-proof banking | Bank Automation News

    Scaling with purpose: 4 ways to future-proof banking | Bank Automation News

    [ad_1]

    The importance of customer experience has increased exponentially over the past few years as people bring more aspects of their lives online. This year, more than 65% of Americans are using digital banking as their preferred banking method, according to a May 2022 survey published by Statista. So, what does this mean? Financial institutions must adapt and follow suit by prioritizing a digital customer experience in order to thrive.

    Juan Vela, global head of market strategy, Cisco Meraki

    With an accelerating shift to a digitized world, customers are increasingly forgoing traditional bank branches and are instead conducting transactions, depositing checks, opening accounts and more online. Some banks even provide an online-only experience, eliminating physical branches entirely.

    As the popularity of digital banking rises, financial institutions must consider how they can stand out in a crowded market to not only attract new customers, but also retain old ones with an experience-led approach.

    To maintain their competitive edge, banks must prioritize a tech-driven experience for their customers. By implementing enhanced connectivity, security and intelligence across their infrastructure, financial institutions will be able to future-proof their business and improve the customer experience.

    1. Cloud-first approach for unified, connected experiences

    For the financial services industry, digital transformation calls for end-to-end augmentation of the processes, business practices and methodologies behind financial service delivery. In fact, some may say it's essential for financial institutions to take a cloud-first approach to unify the physical and digital worlds: it provides greater visibility into all aspects of a network, and, when IoT devices and cameras are introduced, into the physical aspects of a business as well, yielding valuable insights into customer behavior.

    With those insights, a cloud-first approach then helps businesses iterate faster on new customer experiences and quickly pivot as the behaviors of customers change over time. It also becomes easier to rapidly implement updates to address newly detected cybersecurity threats while prioritizing and securing application experiences, as more and more customers transition to a purely digital banking experience.

    One important strength of a cloud-first approach is the ability to scale a business in near real-time to meet customer needs as they happen. Whether it’s adding new branches, features or applications, a cloud network can implement these in minutes without disrupting other operations on the network. Because of this, cloud migration has become a priority.

    2. Enhance experiences with machine learning

    Customers have a near infinite choice of banking options and expect a secure digital experience every time they make a transaction; they need it to be executed quickly and completed with greater accuracy than ever before. Machine learning has the ability to see how a network is behaving and transform that information into insights and recommendations to make a network run at its best, so customers get the most reliable and consistent experience.

    For a financial institution, it takes the guesswork out of optimizing a network to create the most efficient network possible. This not only saves money by making the best use of resources available, but also provides the insights needed to better plan for the future. In many cases, machine learning can be automated for the network to make the recommended changes itself.

    Automation can be taken one step further by leveraging APIs to automate many of the manual tasks within a network such as deploying new locations and features, or to gain specialized information regarding how customers use certain banking assets such as ATMs. The point is to provide staff with the ability to accomplish more in less time while gaining the information needed to make intelligent decisions about future network needs.

    3. The internet of things powers branch transformation

    While many financial institutions may already implement technology-driven aspects into the in-person banking experience, banks on the laggard side of the digital divide are losing customers and managed assets. This has resulted in a tremendous push to bring digital banking to life inside the branch to accommodate evolved banking expectations.

    Banks are leveraging Wi-Fi connectivity and the internet of things (IoT) to enhance in-person customer experiences. Upon walking in and signing into the check-in kiosk, customers are transported to a customized app-like experience in the branch.

    Bank managers are utilizing heatmaps and people-counting capabilities within cloud-based smart cameras to optimize staffing and reduce queue wait times. Smart cameras outside can optimize the drive-thru experience for customers, keeping track of the number of cars and wait times, and alerting banks when additional staffing is required to speed service and improve the customer experience. Behind the scenes, environmental sensors are monitoring and protecting the critical IT infrastructure powering these outcomes. As physical security is also automatically monitored by the aforementioned cloud-based smart cameras, the bank has become a welcoming and safe environment.

    4. SD-WAN network protection

    With cybersecurity attacks on the rise, financial institutions are allocating upwards of 10% of IT spend to deliver best-in-class security for their stakeholders and customers alike, according to Deloitte. According to the U.S. Federal Reserve, cybersecurity events are among the top risks to financial stability. As financial institutions are entrusted with sensitive customer information, and the quantified costs of security incidents are high and growing, endpoint and network security become even more important.

    Endpoint and network security are poised to become the largest components of cybersecurity spend in the industry, having grown in share over the last several years. As such, firms need a converged security and SD-WAN approach that can scale security, performance and resiliency across regions, devices and technologies in the simplest manner—one that leverages the power of the cloud.

    A cloud-managed SD-WAN architecture keeps customer and institutional data secure across networks. Cloud-managed SD-WAN also facilitates the commensurate data flow and communication that enables financial services organizations to serve their customers’ rapidly evolving needs. With networks touching more nodes than ever before, it becomes paramount to leverage the cloud in order to manage devices, flows and policies from a common decision-making platform.

    Cloud-managed SD-WAN architecture also adds context-specific visibility into operations, employee locations and data flows that help IT leaders act on new insights while continuing to optimize for security, accessibility and performance that help improve employee and customer satisfaction. As financial institutions increasingly advance in their respective digital transformations, they’re also now storing information across regions, devices and storage centers that span on-premises and the public cloud. A cloud-managed SD-WAN architecture enables IT leaders to deploy common security policies across networks in order to thwart cyberattacks and maintain security across both private and public clouds.

    Enhancing security both within an organization and at the service edge will require a strong cloud-managed SD-WAN architecture capable of handling increases in connected networks, regions, physical sites, applications and devices. With this in mind, financial institutions will not only stand out from the competition and develop differentiation built on security, but also future-proof their business by building in flexibility and scalability with common, deployable cloud-managed policy.

    Juan Vela is the Global Head of Market Strategy at Cisco Meraki 

    [ad_2]

    Juan Vela

    Source link

  • 3 ways banks can modernize document processing | Bank Automation News

    3 ways banks can modernize document processing | Bank Automation News

    [ad_1]

    Financial institutions looking to mitigate manual data entry and increase efficiencies are turning to automation for their document-heavy operations. Document processing allows banks to capture data and filter it for employee use, compliance needs or audits, Joe Labbe, vice president of product development at intelligent document processing firm KnowledgeLake, said today during the webinar “The […]

    [ad_2]

    Brian Stone

    Source link

  • Computers May Have Cracked the Code to Diagnosing Sepsis

    Computers May Have Cracked the Code to Diagnosing Sepsis

    [ad_1]

    This article was originally published in Undark Magazine.

    Ten years ago, 12-year-old Rory Staunton dove for a ball in gym class and scraped his arm. He woke up the next day with a 104-degree Fahrenheit fever, so his parents took him to the pediatrician and eventually the emergency room. It was just the stomach flu, they were told. Three days later, Rory died of sepsis after bacteria from the scrape infiltrated his blood and triggered organ failure.

    “How does that happen in a modern society?” his father, Ciaran Staunton, asked me.

    Each year in the United States, sepsis kills more than a quarter million people—more than stroke, diabetes, or lung cancer. One reason for all this carnage is that if sepsis is not detected in time, it’s essentially a death sentence. Consequently, much research has focused on catching sepsis early, but the condition’s complexity has plagued existing clinical support systems—electronic tools that use pop-up alerts to improve patient care—with low accuracy and high rates of false alarm.

    That may soon change. Back in July, Johns Hopkins researchers published a trio of studies in Nature Medicine and npj Digital Medicine showcasing an early-warning system that uses artificial intelligence. The system caught 82 percent of sepsis cases and significantly reduced mortality. While AI—in this case, machine learning—has long promised to improve health care, most studies demonstrating its benefits have been conducted using historical data sets. Sources told me that, to the best of their knowledge, when used on patients in real time, no AI algorithm has shown success at scale. Suchi Saria, the director of the Machine Learning and Healthcare Lab at Johns Hopkins University and the senior author of the studies, said in an interview that the novelty of this research is how “AI is implemented at the bedside, used by thousands of providers, and where we’re seeing lives saved.”

    The Targeted Real-Time Early Warning System scans through hospitals’ electronic health records—digital versions of patients’ medical histories—to identify clinical signs that predict sepsis, alert providers about at-risk patients, and facilitate early treatment. Leveraging vast amounts of data, TREWS provides real-time patient insights and a unique level of transparency in its reasoning, according to the Johns Hopkins internal-medicine physician Albert Wu, a co-author of the study.

    Wu says that this system also offers a glimpse into a new age of medical electronization. Since their introduction in the 1960s, electronic health records have reshaped how physicians document clinical information; nowadays, however, these systems primarily serve as “an electronic notepad,” he added. With a series of machine-learning projects on the horizon, both from Johns Hopkins and other groups, Saria says that using electronic records in new ways could transform health-care delivery, providing physicians with an extra set of eyes and ears—and helping them make better decisions.

    It’s an enticing vision, but one in which Saria, the CEO of the company developing TREWS, has a financial stake. This vision also discounts the difficulties of implementing any new medical technology: Providers might be reluctant to trust machine-learning tools, and these systems might not work as well outside controlled research settings. Electronic health records also come with many existing problems, from burying providers under administrative work to risking patient safety because of software glitches.

    Saria is nevertheless optimistic. “The technology exists; the data is there,” she says. “We really need high-quality care-augmentation tools that will allow providers to do more with less.”


    Currently, there’s no single test for sepsis, so health-care providers have to piece together their diagnoses by reviewing a patient’s medical history, conducting a physical exam, running tests, and relying on their own clinical impressions. Given such complexity, over the past decade, doctors have increasingly leaned on electronic health records to help diagnose sepsis, mostly by employing rules-based criteria: if this, then that.

    One such example, known as the SIRS criteria, says a patient is at risk of sepsis if two of four clinical signs—body temperature, heart rate, breathing rate, white-blood-cell count—are abnormal. This broadness, although helpful for catching the various ways sepsis might present itself, triggers countless false positives. Take a patient with a broken arm: “A computerized system might say, ‘Hey, look, fast heart rate, breathing fast.’ It might throw an alert,” says Cyrus Shariat, an ICU physician at Washington Hospital in California. The patient almost certainly doesn’t have sepsis but would nonetheless trip the alarm.
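    The "if this, then that" logic of a SIRS-style check, and why it misfires on Shariat's broken-arm patient, can be sketched directly. The thresholds below are the commonly cited adult cutoffs; real implementations vary.

```python
# Minimal sketch of a rules-based SIRS-style sepsis check.
# Thresholds are the commonly cited adult cutoffs, for illustration only.

def sirs_flags(temp_c, heart_rate, resp_rate, wbc_per_ul):
    """Count how many of the four SIRS criteria are abnormal."""
    flags = 0
    if temp_c > 38.0 or temp_c < 36.0:              # body temperature
        flags += 1
    if heart_rate > 90:                              # heart rate (beats/min)
        flags += 1
    if resp_rate > 20:                               # breathing rate (breaths/min)
        flags += 1
    if wbc_per_ul > 12_000 or wbc_per_ul < 4_000:    # white-blood-cell count
        flags += 1
    return flags

def sirs_alert(temp_c, heart_rate, resp_rate, wbc_per_ul):
    # The rule: two or more abnormal signs trigger the sepsis-risk alert.
    return sirs_flags(temp_c, heart_rate, resp_rate, wbc_per_ul) >= 2

# The broken-arm patient from the text: fast heart rate and breathing,
# but normal temperature and white count -- the alarm still trips.
print(sirs_alert(37.0, 110, 24, 8_000))  # True (a false positive)
```

    Two abnormal vitals from pain alone are enough to fire the alert, which is exactly the broadness-versus-false-positive trade-off the criteria embody.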

    These alerts also appear on providers’ computer screens as a pop-up, which forces them to stop whatever they’re doing to respond. So, despite these rules-based systems occasionally reducing mortality, there’s a risk of alert fatigue, where health-care workers start ignoring the flood of irritating reminders. According to M. Michael Shabot, a surgeon and the former chief clinical officer of Memorial Hermann Health System, “It’s like a fire alarm going off all the time. You tend to be desensitized. You don’t pay attention to it.”

    Already, electronic records aren’t particularly popular among doctors. In a 2018 survey, 71 percent of physicians said that the records greatly contribute to burnout, and 69 percent said that they take valuable time away from patients. Another 2016 study found that, for every hour spent on patient care, physicians have to devote two extra hours to electronic health records and desk work. James Adams, the chair of the Department of Emergency Medicine at Northwestern University, calls electronic health records a “congested morass of information.”

    But Adams also says that the health-care industry is at an inflection point to transform the files. An electronic record doesn’t have to simply involve a doctor or nurse putting data in, he says; instead, it “needs to transform to be a clinical-care-delivery tool.” With their universal deployment and real-time patient data, electronic records could warn providers about sepsis and various other conditions—but that will require more than a rules-based approach.

    What doctors need, according to Shabot, is an algorithm that can integrate various streams of clinical information to offer a clearer, more accurate picture when something’s wrong.


    Machine-learning algorithms work by looking for patterns in data to predict a particular outcome, like a patient’s risk of sepsis. Researchers train the algorithms on existing data sets, which helps the algorithms create a model for how that world works and then make predictions on new data sets. The algorithms can also actively adapt and improve over time, without the interference of humans.
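    The train-then-predict pattern described above can be shown with a toy example. This is not the TREWS algorithm: the data is made up, and a simple nearest-centroid rule stands in for the real model.

```python
# Toy illustration of the machine-learning workflow: fit a model on
# historical records, then score new patients. Nearest-centroid stand-in;
# features and labels are invented for the example.

def centroid(rows):
    # Mean of each feature column across a list of feature vectors.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def fit(historical):
    # historical: list of (features, label) pairs from past records,
    # label 1 = developed sepsis, 0 = did not.
    septic = [x for x, y in historical if y == 1]
    healthy = [x for x, y in historical if y == 0]
    return centroid(septic), centroid(healthy)

def predict(model, features):
    # Flag the patient whose vitals sit closer to the septic centroid.
    septic_c, healthy_c = model
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return 1 if dist(features, septic_c) < dist(features, healthy_c) else 0

# features: [heart_rate, resp_rate] (toy values)
history = [([120, 28], 1), ([115, 30], 1), ([70, 14], 0), ([65, 12], 0)]
model = fit(history)
print(predict(model, [118, 26]))  # 1 -> flag as at-risk
```

    A real system would use far richer features and retrain as new outcomes arrive, which is the "adapt and improve over time" behavior the paragraph describes.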

    TREWS follows this general mold. The researchers first trained the algorithm on historical electronic-records data so that it could recognize early signs of sepsis. After this testing showed that TREWS could have identified patients with sepsis hours before they actually got treatment, the algorithm was deployed inside hospitals to influence patient care in real time.

    Saria and Wu published three studies on TREWS. The first tried to determine how accurate the system was, whether providers would actually use it, and if use led to earlier sepsis treatment. The second went a step further to see if using TREWS actually reduced patient mortality. And the third interviewed 20 providers who tested the tool on what they thought about machine learning, including what factors facilitate versus hinder trust.

    In these studies, TREWS monitored patients in the emergency department and inpatient wards, scanning through their data—vital signs, lab results, medications, clinical histories, and provider notes—for early signals of sepsis. (Providers could do this themselves, Saria says, but it might take them about 20 to 40 minutes.) If the system suspected organ dysfunction based on its analysis of millions of other data points, it flagged the patient and prompted providers to confirm sepsis, dismiss the alert, or temporarily pause the alert.

    “This is a colleague telling you, based upon data and having reviewed all this person’s chart, why they believe there’s reason for concern,” Saria says. “We very much want our frontline providers to disagree, because they have ultimately their eyes on the patient.” And TREWS continuously learns from these providers’ feedback. Such real-time improvements, as well as the diversity of data TREWS considers, are what distinguish it from other electronic-records tools for sepsis.

    In addition to these functional differences, TREWS doesn’t alert providers with incessant pop-up boxes. Instead, the system uses a more passive approach, with alerts arriving as icons on the patient list that providers can click on later. Initially, Saria was worried this might be too passive: “Providers aren’t going to listen. They’re not going to agree. You’re mostly going to get ignored.” However, clinicians responded to 89 percent of the system’s alerts. One physician interviewed for the third study described TREWS as less “irritating” than the previous rules-based system.

    Saria says that TREWS’s high adoption rate shows that providers will trust AI tools. But Fei Wang, an associate professor of health informatics at Weill Cornell Medicine, is more skeptical about how these findings will hold up if TREWS is deployed more broadly. Although he calls these studies first-of-a-kind and thinks their results are encouraging, he notes that providers can be conservative and resistant to change: “It’s just not easy to convince physicians to use another tool they are not familiar with,” Wang says. Any new system is a burden until proven otherwise. Trust takes time.

    TREWS is further limited because it only knows what’s been inputted into the electronic health record—the system is not actually at the patient’s bedside. As one emergency-department physician put it, in an interview for the third study, the system “can’t help you with what it can’t see.” And even what it can see is filled with missing, faulty, and out-of-date data, according to Wang.

    But Saria says that TREWS’s strengths and limitations complement those of health-care providers. Although the algorithm can analyze massive amounts of clinical data in real time, it will always be limited by the quality and comprehensiveness of the electronic health record. The goal, Saria adds, is not to replace physicians, but to partner with them and augment their capabilities.


    The most impressive aspect of TREWS, according to Zachary Lipton, an assistant professor of machine learning and operations research at Carnegie Mellon University, is not the model’s novelty, but the effort it must have taken to deploy it on 590,736 patients across five hospitals over the course of the study. “In this area, there is a tremendous amount of offline research,” Lipton says, but relatively few studies “actually make it to the level of being deployed widely in a major health system.” It’s so difficult to perform research like this “in the wild,” he adds, because it requires collaborations across various disciplines, from product designers to systems engineers to administrators.

    As such, by demonstrating how well the algorithm worked in a large clinical study, TREWS has joined an exclusive club. But this uniqueness may be fleeting. Duke University’s Sepsis Watch algorithm, for one, is currently being tested across three hospitals following a successful pilot phase, with more data forthcoming. In contrast with TREWS, Sepsis Watch uses a type of machine learning called deep learning. Although this can provide more powerful insights, how the deep-learning algorithm comes to its conclusions is unexplainable—a situation that computer scientists call the black-box problem. The inputs and outputs are visible, but the process in between is impenetrable.

    On the one hand, there’s the question of whether this is really a problem: Doctors don’t always know how drugs work, Adams says, “but at some point, we have to trust what the medicine is doing.” Lithium, for example, is a widely used, effective treatment for bipolar disorder, but nobody really understands exactly how it works. If an AI system is similarly useful, maybe interpretability doesn’t matter.

    Wang suggests that that’s a dangerous conclusion. “How can you confidently say your algorithm is accurate?” he asks. After all, it’s difficult to know anything for sure when a model’s mechanics are a black box. That’s why TREWS, a simpler algorithm that can explain itself, might be a more promising approach. “If you have this set of rules,” Wang says, “people can easily validate that everywhere.”

    Indeed, providers trusted TREWS largely because they could see descriptions of the system’s process. Of the clinicians interviewed, none fully understood machine learning, but that level of comprehension wasn’t necessary.


    In machine learning, although the specific algorithmic design is important, the results have to speak for themselves. By catching 82 percent of sepsis cases and reducing time to antibiotics by 1.85 hours, TREWS ultimately reduced patient deaths. “This tool is, No. 1, very good; No. 2, received well by clinicians; and No. 3, impacts mortality,” Adams says. “That combination makes it very special.”

    However, Shariat, the ICU physician at Washington Hospital in California, was more cautious about these findings. For one, these studies only compared patients with sepsis who had the TREWS alert confirmed within three hours to those who didn’t. “They’re just telling us that this alert system that we’re studying is more effective if someone responds to it,” Shariat says. A more robust approach would have been to conduct a randomized controlled trial—the gold standard of medical research—where half of patients got TREWS in their electronic record while the other half didn’t. Saria says that randomization would have been difficult to do given patient-safety concerns, and Shariat agrees. Even so, he says that the absence “makes the data less rigorous.”

    Shariat also worries that the sheer volume of alerts, with about two out of three being false positives, might contribute to alert fatigue—and potentially overtreatment with fluids and antibiotics, which can lead to serious medical complications such as pulmonary edema and antibiotic resistance. Saria acknowledges that TREWS’s false-positive rate, although lower than that of existing electronic-health-record systems, could certainly improve, but says it will always be crucial for clinicians to continue to use their own judgment.

    The studies also have a conflict of interest: Saria is entitled to revenue distribution from TREWS, as is Johns Hopkins. “If this goes prime time, and they sell it to every hospital, there’s so much money,” Shariat says. “It’s billions and billions of dollars.”

    Saria maintains that these studies went through rigorous internal and external review processes to manage conflicts of interest, and that the vast majority of study authors don’t have a financial stake in this research. Regardless, Shariat says it will be crucial to have independent validation to confirm these findings and ensure the system is truly generalizable.

    The Epic Sepsis Model, a widely used algorithm that scans through electronic records but doesn’t use machine learning, is a cautionary example here, according to David Bates, the chief of general internal medicine at Brigham and Women’s Hospital. He explains that the model was developed at a few health systems with promising results before being deployed at hundreds of others. The model then deteriorated, missing two-thirds of patients with sepsis and having a concerningly high false-positive rate. “You can’t really predict how much the performance is going to degrade,” Bates says, “without actually going and looking.”

    Despite the potential drawbacks, Orlaith Staunton, Rory’s mother, told me that TREWS could have saved her son’s life. “There was complete breakdown in my son’s situation,” she said; none of his clinicians considered sepsis until it was too late. An early-warning system that alerted them about the condition, she added, “would make the world of difference.”

    After Rory’s death, the Stauntons started the organization End Sepsis to ensure that no other family would have to go through their pain. In part because of their efforts, New York State mandated that hospitals develop sepsis protocols, and the CDC launched a sepsis-education campaign. But none of this will ever bring back Rory, Ciaran Staunton said: “We will never be happy again.”

    This research is personal for Saria as well. Almost a decade ago, her nephew died of sepsis. By the time it was discovered, there was nothing his doctors could do. “It all happened too quickly, and we lost him,” she says. That’s precisely why early detection is so important—life and death can be mere minutes away. “Last year, we flew helicopters on Mars,” Saria says, “but we’re still freaking killing patients every day.”

    [ad_2]

    Simar Bajaj

    Source link

  • Butlr Technologies Awarded as Technology Pioneer by World Economic Forum

    Butlr Technologies Awarded as Technology Pioneer by World Economic Forum

    [ad_1]

The World Economic Forum announced its selection of the 100 most promising Technology Pioneers of 2022: companies tackling issues from sustainability and climate change to healthcare and more. This year's cohort includes representation from 30 economies on six continents, with reach far beyond traditional tech hubs like Silicon Valley. Among them is Butlr Technologies Inc., which operates an anonymous people-sensing platform using AI-powered thermal sensors.

    Press Release


    May 10, 2022

Butlr Technologies, an MIT Media Lab spinout founded by Forbes 30 Under 30 entrepreneurs Jiani Zeng and Honghao Deng, was selected from among hundreds of candidates as a "Technology Pioneer" by the World Economic Forum. Butlr was recognized for its breakthrough people-sensing platform, which uses thermal sensors and machine learning technology to accurately and anonymously understand human presence and activity indoors.

    The spatial insights, occupancy and activity analytics derived from Butlr’s platform are used to make data-driven real estate decisions, to improve the occupant experience and run buildings more efficiently. In addition, Butlr’s sensors, incapable of capturing PII, are being deployed in senior living care settings for remote monitoring and fall detection applications.

Leveraging a robust, open API and an integration-friendly system architecture, Butlr is part of a large network of innovative partners, such as GP Pro and Anders + Kern, that shares Butlr's vision of advancing the workplace of tomorrow in a scalable, cost-effective and private manner. Such integrations allow joint users to optimize the operation and cleaning schedules of their spaces, provide safer environments to tenants around the globe and access a rich pool of occupancy data through user-friendly, insightful dashboards.

    “It is humbling to be selected as a Technology Pioneer alongside some of the world’s leading innovators,” said Honghao Deng, co-founder and CEO of Butlr. “We envision a world in which the built environment is responsive to its inhabitants, leading to smarter buildings and to people leading richer, healthier lives. We’re delighted that our work is gaining global attention and look forward to contributing to the Forum’s dialogues in this arena.”

    “We’re excited to welcome Butlr.io to our 2022 cohort of Technology Pioneers,” says Saemoon Yoon, Community Lead, Technology Pioneers, World Economic Forum. “Butlr and its fellow pioneers are at the forefront of industries that are critical to solving some of our world’s most complex issues today. We look forward to their contribution to the World Economic Forum in its commitment to improving the state of the world.”

Technology Pioneers have been selected based on the community's selection criteria, which include innovation, impact and leadership, as well as the company's relevance to the World Economic Forum's Platforms.

    The diversity of these companies extends to their innovations as well. This year’s Tech Pioneer firms are shaping the future by advancing technologies such as AI, IoT, robotics, blockchain, biotechnology and many more. The full list of Technology Pioneers can be found here.

    All info on this year’s Technology Pioneers can be found here: http://wef.ch/techpioneers22

    More information on past winners, information on the community and the application link can be found here.

About Butlr: Spun out of the MIT Media Lab in 2019 with offices in Silicon Valley and Boston, Butlr was founded by Forbes 30 Under 30 entrepreneurs Honghao Deng and Jiani Zeng with a mission to make the built environment people-aware. Butlr's People Sensing Platform anonymously infers human presence and activity via its thermal, wireless sensors to deliver rich spatial insights at a fraction of the cost and time of legacy alternatives. Since the launch of its platform in late 2021, the company has been working with dozens of top occupiers, landlords and service providers in North America, Europe and Asia, in addition to partners in senior living and retail.

    About World Economic Forum: The World Economic Forum, committed to improving the state of the world, is the International Organization for Public-Private Cooperation. The Forum engages the foremost political, business and other leaders of society to shape global, regional and industry agendas. (www.weforum.org).

    About the Technology Pioneers:

    The World Economic Forum believes that innovation is critical to the future well-being of society and to driving economic growth. Launched in 2000, the Technology Pioneer community is composed of early to growth-stage companies from around the world that are involved in the design, development and deployment of new technologies and innovations, and are poised to have a significant impact on business and society.

    The World Economic Forum provides the Technology Pioneers community with a platform to engage with the public- and private-sector leaders and to contribute new solutions to overcome the current crisis and build future resiliency.

    Media contact:

    Ioanna Sotiriou
    ioanna@butlr.io

    Source: Butlr Technologies

    [ad_2]

    Source link

  • A Machine Learning Company in California Using Quantum Computers at Mathlabs Ventures is Building the First Q40 ME Fusion Energy Generator Using Advanced AI & Neural Networks

    A Machine Learning Company in California Using Quantum Computers at Mathlabs Ventures is Building the First Q40 ME Fusion Energy Generator Using Advanced AI & Neural Networks

    [ad_1]

Harvard mathematicians using artificial intelligence, machine learning, blockchain and neural networks on a quantum computer have developed breakthrough algorithms and simulations that Kronos Fusion Energy Algorithms says will enable the world's most efficient Fusion Energy power plants to open 20 years earlier than planned, with a Q40 mechanical gain.

    Press Release


    Jan 10, 2022

Kronos Fusion Energy Algorithms LLC (KFEA-Q40) and MathLabs Ventures announced today that, after 60 years of global research, the Fusion Energy industry is poised to accelerate its growth rapidly and build commercially viable power plants 20 years earlier than planned, thanks to three recent major advances in technology. The three major obstacles to commercial success in Fusion Energy have recently been overcome by these advancements, which together will make it possible to build efficient Fusion Energy power plants on Earth by the mid-2030s. These innovations, ongoing contracts and patents put KFEA's current valuation at $530M, with $1.2B in projected earnings over the next two years.

“We at Kronos are building a world-class team of mathematicians, physicists, scientists and other professionals whose mission is to reverse global warming by helping to make Fusion Energy commercially viable in the near future,” said Michael Pierce Hoban, the CEO of Kronos Fusion Energy Algorithms.

Recreating the power of the sun on Earth in a controlled manner takes computing power, machine learning, artificial intelligence, blockchain, quantum computers, neural networks and other technological advances that were not even dreamed of 60 years ago, when Fusion Energy research began globally. Now, with these three technological breakthroughs, the global competition to design next-generation Fusion Energy power plants that are more efficient than today's carbon-burning power plants is in full swing.

The first technological barrier to fall was computing power: the sun can now be modeled in simulations with far greater accuracy, following the launch of the Summit supercomputer at Oak Ridge, which set the world record in 2018, and Japan's Fugaku supercomputer, which set a new world record of 442 petaflops in June 2021.

The second technological barrier was overcome in September 2021 with the announcement of the most powerful magnet ever created on Earth (https://news.mit.edu/2021/MIT-CFS-major-advance-toward-fusion-energy-0908). It is the first magnet powerful enough to contain a fast-moving plasma field at temperatures in excess of 150 million degrees Celsius without the plasma touching and melting the containment barrier.

The third technological barrier, and the most difficult to overcome, is the 1% efficiency rate (Q1 mechanical gain) of today's top Fusion Energy demonstration reactors. The first two breakthroughs will enable the world's top Fusion Energy designers to reach a 25% efficiency rate (Q25 mechanical gain) by 2050; until now, no proposed fusion reactor design anywhere in the world has exceeded 25% efficiency.

Kronos Fusion Energy Algorithms LLC announced that, after five years studying the global research in Fusion Energy, it has developed advanced algorithms and simulations to achieve a 40% efficiency rate (Q40 mechanical gain) for commercial Fusion Energy power plants, enabling a 20-year advancement in the launch dates of the world's first Fusion Energy power plants that are more efficient than today's carbon-burning power plants. The company's algorithms and simulations use artificial intelligence, machine learning, neural networks, blockchain, quantum computing and other advances to reduce the error rate at a Fusion Energy reactor from the 15% experienced today at the International Thermonuclear Experimental Reactor (ITER) in France to 1%, after the simulations have optimized the numerous variables to identify the disruptions that cause 31% of the maintenance shutdowns at ITER.

    Kronos Fusion Energy Algorithms: Developing ALGORITHMS & SIMULATIONS to build Micro Fusion Energy Generators with Q40 Mechanical Gain for a CLEAN + LIMITLESS Energy Future

    MEDIA CONTACT:

    PRIYANCA FORD  

    Founder & Chief Strategy Officer at Kronos Fusion Energy Algorithms

    Priyanca_Ford@post.harvard.edu

    Source: MathLabs Ventures

    [ad_2]

    Source link

  • Rate Highway and Perfect Price Partner to Deliver Artificial Intelligence for Car Rental Pricing

    Rate Highway and Perfect Price Partner to Deliver Artificial Intelligence for Car Rental Pricing

    [ad_1]

    Press Release



    updated: Jun 2, 2017

    RateHighway, the leading provider of automated rate positioning technology for the global auto rental industry, has partnered with Perfect Price, the leader in artificial intelligence for revenue management and price optimization, to deliver the first artificial intelligence solution to car rental companies. 

    “We are delighted to offer this groundbreaking, first-of-its-kind capability to our customers,” said Michael Meyer, President, RateHighway. “In today’s exceedingly competitive car rental environment, driving rental profitability is more important than ever.”


The comprehensive pricing solution combines the leading rate automation technology RateHighway has provided since 2002 with the revolutionary artificial intelligence capabilities Perfect Price brings to the industry from Microsoft, Twitter and the FERMI nuclear physics laboratory.

    “I was amazed by how quickly AI improved our business,” said Sharky Laguana, CEO, Bandago and Board Member, American Car Rental Association. “We have seen both utilization and revenue per unit climb measurably in cities where we use Perfect Price, while staying the same in cities where we left our old pricing model in place.”

    “For decades, companies shot from the hip on pricing,” said Alex Shartsis, CEO, Perfect Price. “Then automation made better rate positioning possible. But without rigor and oversight, it can result in a race to the bottom. Artificial intelligence represents a new way to recapture time, revenue and profit while increasing growth.”

    “With this partnership, we build on RateHighway’s transformative automation, which has delivered windfalls for its customers. Together, we enable the next level of business excellence through artificial intelligence — for nearly any car rental business, not just the majors,” said Mr. Meyer.

    About RateHighway, Inc.

RateHighway is the leading provider of revenue management technology for the global auto rental industry. RateHighway has been providing web rate gathering technology to the travel industry since 1998 and introduced the all-inclusive, ground-breaking RateMonitor® automated rate positioning product to the auto rental industry in 2004. RateMonitor is a full-cycle rate gathering, analysis and correction solution that can ensure your fleet is always competitively priced. For more information, contact sales@ratehighway.com.

    About Perfect Price, Inc.

    Perfect Price is the leader in artificial intelligence for revenue management and price optimization. Headquartered in San Francisco, Perfect Price serves global customers with the most advanced artificial intelligence and machine learning based solutions in a software as a service model. For more information, contact press@perfectprice.com.

    Source: Rate-Highway, Inc.

    [ad_2]

    Source link

  • RateMonitor® Now Even More Powerful With the Addition of Artificial Intelligence

    RateMonitor® Now Even More Powerful With the Addition of Artificial Intelligence

    [ad_1]

    RateMonitor delivers advanced A.I. capabilities, empowering car rental companies to price their vehicles based on predictive analysis

    Press Release



    updated: May 9, 2017

Rate-Highway, the leading provider of Revenue Management software for the global car rental industry, is proud to announce that its flagship product, RateMonitor®, now has artificial intelligence (A.I.) integrated into the platform, making RateMonitor not only the world's most powerful but also the smartest revenue management system in the car rental industry.

    RateMonitor continues its breakthrough innovation streak by coupling advanced A.I. capabilities with the leading revenue management system — basing car rental rates on local events, competitor pricing, current utilization, historical data, forecasts and more — all reviewed automatically using machine learning. It’s so effortless that anyone will now be able to use A.I.-powered rules that get smarter with every interaction.

    With Rate-Highway’s RateMonitor, we are delivering the world’s smartest Revenue Management tool for the car rental industry. RateMonitor makes it easy for everyone to take advantage of best-in-class A.I. capabilities when pricing their vehicles.

    Chris Mitrision, Product Manager, Rate-Highway

    About Rate-Highway

Rate-Highway is the leading provider of automated rate positioning technology for the global auto rental industry. Rate-Highway has been providing rate gathering technology to the auto rental industry since 2002 and introduced the all-inclusive, ground-breaking RateMonitor automated rate positioning product to the auto rental industry in 2004. RateMonitor is a full-cycle rate gathering, analysis and correction solution that ensures client fleets are always competitively priced and generate the highest revenue possible.

    Source: Rate-Highway, Inc.

    [ad_2]

    Source link