-
Investors Bet $235 Million on Bringing AI to Scientific Research
The field of scientific research is on the verge of a monumental transformation, with significant investments pouring into technologies that aim to accelerate discovery. One of the most exciting developments in this area is the recent investment in Lila Sciences, a biotechnology startup utilizing artificial intelligence to revolutionize how scientific research is conducted.
Lila Sciences emerged from stealth mode earlier this year, and it has quickly captured the attention of investors and researchers alike. The company announced that it has raised a remarkable $235 million, valuing it at approximately $1.23 billion. This funding will enable the company to expand its operations, focusing on creating dedicated laboratories, referred to as “AI science factories,” that will integrate human researchers and advanced AI to expedite the scientific discovery process.
Founded in 2023, Lila Sciences aims to enhance the speed and efficiency with which new materials and drugs are discovered. By leveraging AI algorithms trained on vast amounts of academic literature across disciplines such as materials science, chemistry, and life sciences, the company is poised to create groundbreaking innovations. The AI tools they develop are intended not only to design experiments but also to learn from the data generated, creating a feedback loop that will ultimately lead to faster and more effective research outcomes.
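In the abstract, that loop is a propose-test-learn cycle: use what has been measured so far to pick the next experiment, run it, and fold the result back in. The sketch below illustrates the idea with a toy hill-climbing search over a made-up objective; it is a conceptual illustration only, not Lila's actual system.

```python
def closed_loop_discovery(run_experiment, start=0, rounds=8):
    """Toy discover-test-learn loop: test a candidate, then let the
    result steer the next experiment (here, stepping toward whichever
    neighbouring candidate measures better)."""
    best = start
    best_score = run_experiment(best)
    for _ in range(rounds):
        for candidate in (best - 1, best + 1):  # "design" the next experiments
            score = run_experiment(candidate)   # run them, collect data
            if score > best_score:              # learn from the results
                best, best_score = candidate, score
    return best

# A made-up objective peaking at candidate 7; the loop climbs to it
# one experiment at a time instead of testing everything up front.
best = closed_loop_discovery(lambda c: -(c - 7) ** 2, start=0, rounds=8)
print(best)  # 7
```

The point of the feedback loop is visible even in this toy: each measurement narrows where the next experiment should go, so far fewer experiments are needed than an exhaustive sweep.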
According to Geoffrey von Maltzahn, the co-founder and CEO of Lila Sciences, traditional research methods are inherently slow. Scientists typically formulate hypotheses, gather data, conduct experiments, and refine their results over extended periods—often years. Lila’s approach aims to disrupt this paradigm by dramatically reducing the time required to discover new scientific insights.
The feedback loop that Lila promotes is critical to reaching discoveries that might take significantly longer, or be impossible, to achieve through conventional means. This is particularly relevant given the limitations of research that relies solely on publicly available data, which von Maltzahn suggests can inhibit progress due to diminishing returns.
The startup’s innovative approach does not just aim to reform scientific methodologies but also promises to contribute to the development of essential technologies, such as novel materials for carbon capture and new pharmaceutical compounds. Such advancements could have profound implications for industries ranging from environmental technology to healthcare.
Lila Sciences is not alone in its quest; companies such as Orbital Materials and Isomorphic Labs are also exploring the integration of AI into scientific research. However, Lila’s commitment to establishing automated labs, populated with AI systems working alongside human expertise, could give it a vital competitive edge in this rapidly evolving landscape.
While Lila has yet to commercialize any of its products, the interest from other firms seeking to utilize its unique capabilities is a promising sign of its potential impact on the industry. Von Maltzahn indicated that the company plans to open its platform to external partners by the end of the year, a move that could facilitate further collaborations and accelerate the pace of innovation in the life sciences sector.
As the influx of investment reflects, the appetite for technology-driven solutions in scientific research is high. Investors are betting that Lila Sciences will successfully harness AI to not only transform the process of research but potentially unlock discoveries that have been elusive under traditional methodologies. The future of scientific research may very well hinge on the advancements made by companies like Lila, as they push the boundaries of what is possible with AI.
In conclusion, Lila Sciences represents a significant step forward in the integration of AI into scientific research. With substantial funding and an innovative approach, it has the potential to redefine how we conduct research, accelerate discovery, and ultimately bring groundbreaking technologies to market.
-
Cloud Hypervisor Will Block AI-Generated Code, Raises x86_64 VM Limit To 8,192 vCPUs
The release of Cloud Hypervisor 48.0 marks a significant milestone for this innovative project initiated by Intel, focused on enhancing virtualization technology to support modern cloud workloads. As an open-source and Rust-based Virtual Machine Monitor (VMM), Cloud Hypervisor has continually adapted to the demands of both Windows and Linux environments, emphasizing security and efficiency in cloud-native applications. With this latest version, various features and enhancements have been introduced, setting a new standard for cloud virtualization.
One of the standout features of this release is the monumental increase in the x86_64 KVM vCPU limit. Previously capped at just 254 vCPUs, the latest iteration elevates this limit to an astonishing 8,192 vCPUs. This increase is particularly noteworthy for organizations that rely on resource-intensive applications and high-performance computing environments, showcasing Cloud Hypervisor’s commitment to scaling up performance without compromising on stability.
Moreover, the introduction of experimental support for the “fw_cfg” device allows users to pass configuration data and files seamlessly between the host and guest systems. This capability is vital for managing VM boot configurations, thereby enhancing the overall user experience in cloud operations. Additionally, a new feature enabling Inter-VM Shared Memory (ivshmem) can potentially optimize memory management among VMs, which is crucial for multi-tenant cloud environments.
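For a sense of how these capabilities surface to users, a typical Cloud Hypervisor invocation looks something like the sketch below. The flags shown (`--kernel`, `--disk`, `--cmdline`, `--cpus`, `--memory`) are long-standing Cloud Hypervisor options; the exact command-line syntax for the new fw_cfg and ivshmem devices is described in the 48.0 release notes and is not reproduced here.

```shell
# Boot a Linux guest with a large vCPU count (all values illustrative).
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=./rootfs.img \
    --cmdline "console=hvc0 root=/dev/vda rw" \
    --cpus boot=256,max=512 \
    --memory size=64G
```

With the 48.0 release, the `boot=` value on x86_64 KVM guests can now scale far beyond the old 254-vCPU ceiling.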
Improved block performance is another highlight, especially when managing smaller block sizes of 16KB or less. This optimization contributes to enhanced data throughput, a necessary attribute for applications leveraging high-frequency transactions or big data analytics. Furthermore, quicker VM pause operations, especially with larger vCPU counts, enhance the manageability of virtual machines, ensuring that enterprises can maintain operational fluidity even during performance-heavy tasks.
An interesting change in this release is the decision to disable Intel SGX support, underlining a clear direction for the project’s future. While the rationale behind this deprecation wasn’t explicitly detailed, it may reflect a shift towards prioritizing other security approaches more aligned with Cloud Hypervisor’s vision.
However, perhaps the most controversial element of Cloud Hypervisor 48.0 is its newly established policy against contributions that include AI-generated code. Amid the growing use of Large Language Models (LLMs) in software development, the project has declared that it will reject any code known to derive from AI sources. This decision is likely aimed at preserving code quality and ensuring that contributions maintain the robust security and performance standards the project has championed since its inception.
This move sparks conversation within the wider tech community regarding the balance between leveraging AI for productivity and the risks of depending on AI-generated outputs. As more projects confront the role of artificial intelligence in development workflows, Cloud Hypervisor’s stance offers a distinct perspective on this evolving landscape, prioritizing human oversight over automated code generation.
The implications of this release extend beyond immediate technical advancements; they touch on essential aspects of cloud architecture, resource management, and software development ethics. Decision-makers, product builders, and investors should closely monitor how Cloud Hypervisor’s developments might influence the future of virtualization and cloud-native applications.
For those looking to dive deeper into the specifics of Cloud Hypervisor 48.0, further details and downloads can be found on GitHub. This release not only signifies progress but also sets a precedent for the intersection of effective cloud solutions and ethical coding practices.
-
From soil to systems: How AI enterprises are transforming the agriculture ecosystem
Indian agriculture is undergoing a profound transformation as it grapples with numerous challenges, from unpredictable weather patterns to rising input costs and water scarcity. The demand for reliable and affordable food continues to escalate, placing immense pressure on the farming community. Traditionally, agricultural decisions have relied on the wisdom of experience and local advice. However, in today’s digital age, this knowledge is being revolutionized by the integration of Artificial Intelligence (AI), shifting the agricultural sector from intuition-driven methods to evidence-based strategies.
AI is not merely modernizing farming practices; it is reshaping the entire agricultural ecosystem — from soil health management and irrigation strategies to logistics, finance, and retail processes. This holistic approach ensures farming is more efficient, sustainable, and capable of meeting the needs of a growing population.
Data as the New Fertilizer
At the heart of this transformation is data. Low-cost soil sensors play a pivotal role in tracking essential parameters such as moisture levels, temperature, and pH throughout the growing season. AI models leverage these readings to provide actionable insights, recommending optimal irrigation schedules, detecting nutrient deficiencies, and advising crop rotation practices that promote soil health. Farmers are now equipped with real-time alerts on the irrigation needs of their fields, significantly reducing unnecessary water usage and preventing root diseases.
Furthermore, technology extends beyond the farm fields. Drones and satellites contribute valuable aerial data, utilizing computer vision to assess plant health through color patterns and canopy structure. This technological advancement dramatically reduces the time required for field assessments from an entire day to merely a ten-minute review via a mobile device, facilitating targeted interventions when necessary.
AI in Crop Management
The benefits of AI in agriculture extend into crop management, particularly in disease and pest identification. Early detection is crucial, as not all diseases manifest simultaneously. By utilizing phone cameras or drone footage processed through AI models, farmers can receive likely diagnoses of diseases such as leaf spots and rust, along with a concise list of approved treatments, including the proper dosages. The same technology is applicable in pest management, helping maximize crop yields by addressing threats before they escalate.
Water management is another significant area where AI contributes. Smart irrigation controllers utilize forecasts, evapotranspiration rates, and soil sensor data to optimize watering schedules, ensuring that crops receive the necessary hydration without excess. This efficiency results in a lower risk of over-saturation, reduced energy costs for water pumps, and improved yields during periods of drought.
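As a rough illustration of the water-balance logic such a controller embodies, consider the toy sketch below. The thresholds, crop coefficient, and target moisture level are invented for the example rather than drawn from any real product.

```python
def irrigation_mm(soil_moisture_pct: float,
                  et0_mm: float,
                  forecast_rain_mm: float,
                  crop_coefficient: float = 1.0,
                  target_moisture_pct: float = 35.0) -> float:
    """Return how many millimetres of water to apply today.

    A toy water-balance rule: replace the crop's expected water loss
    (reference evapotranspiration x crop coefficient), credit any
    forecast rain, and skip irrigation entirely if the soil is
    already at or above the target moisture level.
    """
    if soil_moisture_pct >= target_moisture_pct:
        return 0.0  # soil already wet enough; avoid over-saturation
    demand = et0_mm * crop_coefficient   # expected crop water use (mm)
    needed = demand - forecast_rain_mm   # forecast rain offsets pumping
    return max(needed, 0.0)

# Dry soil, hot day, no rain expected: irrigate the full demand.
print(irrigation_mm(soil_moisture_pct=22, et0_mm=6.0, forecast_rain_mm=0))  # 6.0
```

Even this crude rule captures the savings the article describes: when the forecast already promises enough rain, or the sensors report wet soil, the pumps never run.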
Harvesting Efficiency through Robotics
As harvest season approaches, the agricultural workforce often comes under strain from labour shortages. AI-powered solutions, including automated harvesters, sprayers, and robotic pickers, alleviate this pressure. These technologies do not eliminate jobs entirely but provide essential support when seasonal workers are hard to find or costs are prohibitive. After the harvest, computer vision systems enable the rapid sorting and grading of produce based on size, shape, and surface quality, resulting in consistent product grading that attracts higher prices and minimizes disputes with buyers.
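A grading pipeline of this kind ultimately reduces each item to a few measurements and a rule set. The sketch below shows the flavour of that final step; the size bands and blemish thresholds are invented for illustration and would differ by crop and market.

```python
def grade_produce(diameter_mm: float, blemish_area_pct: float) -> str:
    """Assign a market grade from two measurements a vision system
    might extract from the image of a single fruit.

    The size bands and blemish thresholds are hypothetical examples,
    not real grading standards.
    """
    if blemish_area_pct > 10:
        return "reject"
    if diameter_mm >= 70 and blemish_area_pct <= 2:
        return "premium"
    if diameter_mm >= 55:
        return "standard"
    return "processing"  # small fruit goes to juice or processing

# Each tuple is (diameter in mm, blemished surface area in percent).
batch = [(75, 1.0), (60, 4.0), (48, 0.5), (80, 12.0)]
grades = [grade_produce(d, b) for d, b in batch]
print(grades)  # ['premium', 'standard', 'processing', 'reject']
```

The consistency the article mentions comes precisely from this determinism: the same measurements always yield the same grade, which is hard to guarantee with manual sorting.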
Sustainability and Climate Resilience
AI’s influence extends to sustainability practices, as well. By recommending the minimum effective doses of fertilizers and pesticides and applying these inputs precisely where needed, farmers can not only save money but also protect soil organisms and nearby water resources. Precise irrigation schedules foster healthy aquifers, while season planners suggest climate-resilient crop varieties and sowing times aligned with local weather conditions. Furthermore, extensive data on residue management, cover crops, and reduced tillage practices enhance soil health and bolster carbon retention.
In summary, AI is leading a groundbreaking shift in the agricultural landscape, forming a data-driven ecosystem that supports both efficiency and resilience. This transformation not only enhances productivity but also aligns farming practices with sustainability goals, ensuring that agriculture can adapt to future challenges while continuing to provide essential food supplies for the population.
-
uTrade Launches a Unified Platform Combining AI, Copy Trading, NFTs, and DeFi into One
The financial landscape is evolving rapidly, and uTrade is at the forefront of this transformation with the launch of a groundbreaking unified trading ecosystem. Announced on September 12, 2025, in Miami, Florida, this platform aims to seamlessly integrate artificial intelligence (AI), copy trading, non-fungible tokens (NFTs), staking, and decentralized finance (DeFi) into one cohesive experience. This initiative aspires to cater to both novice and seasoned investors, providing them with sophisticated tools that meld institutional-grade trading strategies with community-driven economic empowerment.
Unlike many cryptocurrency platforms that operate in narrowly defined niches, uTrade has positioned itself as a robust multi-layered ecosystem. This structure incorporates various components such as automated trading bots, profit-sharing NFTs, a deflationary token model, and mechanisms that prioritize community benefits. The ultimate goal is to foster a transparent and sustainable financial environment accessible to all.
uTrade’s Vision: Bridging Traditional Trading and Decentralized Finance
In its whitepaper, uTrade articulates its mission as crafting “a gateway to the financial trading revolution.” By merging the precision of traditional trading tools with the egalitarian ethos of decentralized finance, uTrade endeavors to create a platform that is more than merely transactional. It reflects a commitment to democratizing access to advanced financial strategies that were previously the reserve of sophisticated investors and institutions.
Traditional trading often relies on intricate software and heavily guarded ecosystems, posing a barrier for many potential investors. uTrade aims to dismantle these hurdles, making complex trading strategies accessible to a broader audience. By embedding DeFi principles—such as profit sharing, token burns, and strategic treasury reinvestment—into its operations, the platform ensures that growth benefits the community rather than a select few at the top.
AI Trading Bots and Advanced Automation for Everyday Investors
At the heart of uTrade’s approach are AI-driven trading bots capable of dynamically adjusting to evolving market conditions. For instance, the Futures Grid Bot—utilized on the Pionex exchange—executes buy and sell orders based on price ranges defined by the user. This strategy proves especially advantageous in sideways markets, where price oscillations within a range create numerous opportunities for incremental gains.
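Stripped of exchange integration, the core of a grid strategy is just a ladder of price levels, with buys triggered as the price falls through a line and sells as it rises through one. The sketch below is a deliberately simplified illustration of that idea, not uTrade's or Pionex's actual implementation; real bots also track per-level inventory, fees, and order state.

```python
def build_grid(lower: float, upper: float, levels: int) -> list[float]:
    """Evenly spaced price levels between lower and upper bounds."""
    step = (upper - lower) / (levels - 1)
    return [round(lower + i * step, 2) for i in range(levels)]

def grid_signal(price: float, grid: list[float], last_level: int) -> tuple[str, int]:
    """Emit 'buy' when the price falls to a lower grid line and
    'sell' when it rises to a higher one; 'hold' otherwise."""
    # Highest grid line at or below the current price.
    level = max((i for i, p in enumerate(grid) if p <= price), default=0)
    if level < last_level:
        return "buy", level   # price dropped through a grid line
    if level > last_level:
        return "sell", level  # price rose through a grid line
    return "hold", level

grid = build_grid(90.0, 110.0, 5)             # [90.0, 95.0, 100.0, 105.0, 110.0]
print(grid_signal(96.0, grid, last_level=2))   # fell below 100 -> ('buy', 1)
print(grid_signal(104.9, grid, last_level=1))  # rose past 100  -> ('sell', 2)
```

This is also why the strategy suits sideways markets: every oscillation across a grid line completes a small buy-low/sell-high round trip.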
Additionally, uTrade’s AI trading bots are programmed to analyze a vast array of indicators and data sources, such as Relative Strength Index (RSI), Moving Average Convergence Divergence (MACD), market supply-demand fluctuations, macroeconomic insights, and institutional trading behaviors. By mimicking the adaptability of seasoned human traders, these bots aim to consistently optimize trading outcomes.
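To make one of those indicators concrete, here is a minimal RSI computation. It uses simple averages of gains and losses rather than Wilder's exponential smoothing (the usual refinement), and it illustrates the indicator itself, not anything specific to uTrade's bots.

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """Relative Strength Index over the last `period` price changes.

    RSI = 100 - 100 / (1 + avg_gain / avg_loss), bounded in [0, 100];
    high values suggest overbought conditions, low values oversold.
    """
    deltas = [b - a for a, b in zip(closes, closes[1:])][-period:]
    avg_gain = sum(d for d in deltas if d > 0) / period
    avg_loss = sum(-d for d in deltas if d < 0) / period
    if avg_loss == 0:
        return 100.0  # only gains in the window
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# Steadily rising prices push RSI to 100; balanced moves sit near 50.
print(rsi([float(p) for p in range(1, 20)]))  # 100.0
```

A bot combines several such signals (MACD, order-book imbalance, and so on) rather than acting on any single one in isolation.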
In practice, retail investors can leverage sophisticated strategies without needing extensive technical expertise or programming skills. By operating continuously, the bots remove the need for constant user engagement, offering a level of automation that benefits both proactive traders and those with limited time to engage directly in the markets.
Copy Trading: Verified Experts and Accessible Trading for All
Another standout component of the uTrade platform is its copy trading feature, which empowers users to automatically replicate the trades of verified expert traders. This mechanism not only democratizes access to high-level trading strategies but also enhances user confidence by allowing investors to align themselves with proven market performers, levelling the playing field for investors of all experience levels.
By integrating these advanced features into a single platform, uTrade exemplifies how technology and decentralization intersect to forge more equitable trading environments. This initiative promises to change the way that individuals engage with financial markets, fostering both innovation and inclusiveness in the investment sphere.
As the platform continues to evolve, the implications for both individual investors and the broader financial ecosystem are profound, suggesting that uTrade is not merely a new trading platform but a significant step forward in the evolution of finance.
-
Hostinger’s AI agent redefines client support, saves over €9 million a year
In a striking advancement in customer service technology, Hostinger has revolutionized its AI chat assistant, Kodee, leading to significant savings and enhanced customer satisfaction. As of September 12, 2025, this cutting-edge agent is credited with saving the company over €9 million annually in operational costs, a feat achieved by automating routine client interactions.
The transformation of Kodee is not just a superficial upgrade; it marks a crucial evolution from a simple Q&A tool to an intelligent agent capable of executing real tasks. Where Kodee once provided answers, it now undertakes entire processes for clients. This includes critical activities such as website migration, optimizing website speed, and implementing security measures, all done seamlessly with simple instructions from users. Giedrius Zakaitis, Hostinger’s Chief Product and Technology Officer, described this transition as pivotal, stating, “The results speak for themselves.”
At its core, Kodee’s newly enhanced capabilities boast an impressive achievement: fully resolving 75% of conversations without the need for human intervention. In August alone, the AI engaged in 750,000 conversations, a jump from the previously recorded resolution rate of 50% at the beginning of the year. Notably, when Kodee is allowed to take action, both customer satisfaction and resolution rates see an uptick of approximately 4% compared to cases where it only provides information.
Streamlining Client Support
Hostinger’s primary objective revolves around providing rapid and efficient client support. The company has set a benchmark to respond to all inquiries within two minutes, a goal made more attainable through Kodee’s innovative approach. Unlike human specialists, who can only handle a set number of inquiries at any time, Kodee manages unlimited conversations simultaneously, drastically reducing response times.
With Kodee’s improvements, the average time clients wait for help has plummeted to a mere 9 seconds, down from 28 seconds just a year prior. This means that 93% of all support requests are now resolved in under 2 minutes, a significant rise from 77% the year before.
Navigating Unprecedented Growth
The necessity to adapt has never been clearer as Hostinger’s customer inquiries have doubled in volume over the last four years. Reflecting on 2025 alone, the platform observed almost as many inquiries within eight months as it did throughout all of 2024. This surge correlates with Hostinger’s growing customer base, now exceeding 4 million, coupled with Kodee’s expanding suite of capabilities that allow users to efficiently address their needs.
Against this backdrop of increasing demand, the company has strategically reduced its customer success teams by 20%, resulting in a total of 312 specialists. Zakaitis reassured that those roles weren’t lost to AI but that staff members were reassigned to more complex roles, thus putting their skills to better use, while Kodee tackled repetitive tasks.
The Individual Touch with AI
A notable aspect of Kodee’s functionality is its ability to communicate in over 50 languages, catering to a diverse and global clientele. This multilingual capability ensures that clients can receive assistance in their native tongues, further enriching the customer experience. Hostinger’s commitment to maintaining a high-quality support experience is clear, as they balance intelligent automation with human expertise.
As companies increasingly invest in AI technologies, Hostinger’s experience highlights a potent example of how artificial intelligence can enhance, rather than replace, human roles in customer service. By automating routine inquiries, Kodee frees up valuable time for specialists to focus on resolving complex issues, ultimately leading to improved service delivery.
In conclusion, Hostinger’s Kodee exemplifies the transformative potential of AI in business. Through substantial cost savings, improved response times, and increased customer satisfaction, Kodee showcases the future of client support — one where cutting-edge technology and human ingenuity work hand in hand.
-
Could AI nursing robots help healthcare staffing shortages? | CNN Business
The global healthcare sector is currently grappling with a staggering shortage of labor, projected to reach a deficit of 4.5 million nurses by 2030, according to the World Health Organization (WHO). This shortage has created immense pressure on the existing workforce, with approximately one-third of nurses worldwide suffering from symptoms of burnout, including emotional exhaustion. The situation is dire, exacerbated by a historically high turnover rate within the nursing profession. However, emerging technology, particularly autonomous AI-powered robots like Nurabot, is presenting a potentially transformative solution to this crisis.
Developed by Foxconn, the multinational technology giant from Taiwan, Nurabot is an innovative nursing assistant designed to alleviate the burden on nurses by handling repetitive and physically demanding tasks. These may include activities such as medication delivery and guiding patients throughout the hospital facilities. Foxconn claims that as nurses integrate Nurabot into their workflow, they could potentially reduce their workload by as much as 30%. As Alice Lin, the director of user design at Foxconn, explains, “This is not a replacement of nurses, but more like accomplishing a mission together.” This collaborative approach aims to enhance the quality of patient care by allowing nurses to focus on more critical tasks that require human judgment and expertise.
Nurabot is currently being tested at the Taichung Veterans General Hospital in Taiwan and has been in development for just 10 months following its initial design. Testing began back in April 2025, and Foxconn is gearing up for a commercial launch anticipated at the beginning of next year. Although the company has not released pricing details yet, the implications of this technology could be monumental in addressing staffing shortages in healthcare facilities worldwide.
At the heart of Nurabot’s design is a collaboration between Foxconn and Kawasaki Heavy Industries, a well-known Japanese robotics firm. The advantages of this partnership are evident in Nurabot’s autonomous wheel movements and the ability to perform tasks with its robotic arms. To better suit the context of nursing, Foxconn conducted extensive research to identify the specific challenges nurses face in their daily routines, such as traversing long distances to deliver samples. As a result, Nurabot includes a compartment specifically designed for safely delivering medication and other important supplies between the nurses’ station and patient rooms.
The AI technology powering Nurabot stems from a combination of resources, including Foxconn’s own Chinese-language large language model for communication functionalities and cutting-edge AI infrastructure provided by NVIDIA. The American tech giant played an essential role in developing Nurabot’s core programming, employing a mixture of proprietary AI platforms that enable the robot to navigate hospital environments independently, schedule tasks, and respond to both verbal and physical cues from human staff.
Moreover, AI training and testing were conducted using a virtual environment representative of the hospital, significantly accelerating the development process. The advancements in artificial intelligence have allowed Nurabot to operate with a greater degree of autonomy and human-like behavior, enabling it to perceive, reason, and react adaptively to varying situations within a healthcare setting. David Niewolny, director of business development for health care and medical at NVIDIA, emphasizes that such capabilities allow Nurabot to adjust its actions based on specific patient conditions and contextual needs.
As Nurabot moves closer to its market launch, the pivotal question remains: will AI nursing robots like Nurabot serve as a help or hindrance in the healthcare landscape? While the technology promises to streamline operations and alleviate some of the burdens on human staff, concerns about the implications of increased automation could arise. Thus, industry leaders and stakeholders must closely observe not only the potential efficiencies created by technologies like Nurabot but also the ethical considerations and impacts on nursing professionals.
In summary, as the nursing profession faces unprecedented challenges, the development and integration of AI-powered solutions like Nurabot represent a significant step towards addressing the pressing issue of staffing shortages. As we move forward into an era where technology plays an increasingly prominent role in healthcare delivery, understanding the collaboration between AI and human professionals will be crucial in shaping a more effective and efficient healthcare system.
-
AMA releases CPT 2026 code set, adds codes for health AI
The American Medical Association (AMA) has recently unveiled the 2026 update to the Current Procedural Terminology (CPT) code set, introducing significant advancements that will enhance the documentation of technology-enabled healthcare services. This comprehensive update includes 418 changes, comprising 288 new codes, 84 deletions, and 46 revisions designed to reflect the growing intersection of healthcare and technology.
A standout feature of this update is the introduction of numerous codes specifically aimed at documenting digital health services. Among these updates, healthcare providers will now be able to accurately record shorter-duration remote patient monitoring, specifically for periods lasting from two to 15 days within a 30-day timeframe. This adjustment allows for more nuanced tracking of patient care and remote monitoring management through two new codes that describe services based on just 10 minutes of monitoring per calendar month.
Moreover, the 2026 CPT code set includes groundbreaking codes for health AI services that are designed to augment physician capabilities while improving patient care. Notably, the codes for AI-assisted assessments will cover innovative applications like the evaluation of coronary atherosclerotic plaque and perivascular fat analysis for cardiac risk assessments. These developments exemplify how the medical field is increasingly integrating artificial intelligence to enhance diagnostic accuracy and patient outcomes.
In addition to these innovations, the AMA has recognized the need for codes that support emerging digital technologies, specifically targeting services linked to multi-spectral imaging for burn wounds and tools for the detection of cardiac dysfunction. These efforts indicate a clear acknowledgment of the evolving landscape of healthcare and the significant role that advanced technology plays in patient management and treatment solutions.
The updated code set also includes 12 new codes related to hearing device services, which encompass training and support for patients utilizing personal devices to connect with their hearing aids. This addition establishes a more holistic understanding of patient engagement and care continuity through the integration of supportive technology.
Effective January 1, 2026, the new Category I CPT codes will be crucial for medical professionals as they navigate claims processes and reimbursements, especially under programs like Medicare. The uniform language of the CPT code set is essential for standardizing medical documentation across the healthcare system, ensuring that services are accurately captured and reimbursed. This structured approach is vital not only for patient care but also for the financial sustainability of healthcare practices.
However, recent statements from the Centers for Medicare & Medicaid Services (CMS) reveal a shift in their methodology regarding how they utilize CPT codes in determining the Medicare Physician Fee Schedule. CMS has expressed concerns about the reliance on the Relative Value Scale Update Committee (RUC), citing issues related to potential biases in the survey data collected for setting payment rates. This shift may impact how healthcare professionals approach the documentation and coding for services moving forward.
The introduction of AI-related CPT codes in this update signals the AMA’s proactive stance on the integration of technology in healthcare, ensuring that such innovations are formally recognized within the coding framework. This not only helps in streamlining the documentation process but also paves the way for reimbursement structures that fairly reflect the value of AI-enhanced medical services. As the healthcare sector continues to embrace advanced technologies, staying informed about coding changes will be paramount for physicians, practices, and organizations aiming to leverage innovations for improved patient care.
-
‘Brain-like’ AI uses Chinese chips to run 100 times faster on ultra-long tasks
A groundbreaking development has emerged from China: a research team has unveiled what it touts as the world’s first “brain-like” large language model, a significant leap in artificial intelligence technology. Developed by researchers at the Chinese Academy of Sciences’ Institute of Automation in Beijing, this innovative system, named SpikingBrain 1.0, is designed to operate with reduced energy consumption while achieving strong performance without relying on Nvidia chips.
The concept behind SpikingBrain 1.0 is rooted in mimicking the natural mechanisms of the human brain. Unlike conventional AI models, which activate extensive neural networks continuously, this revolutionary model selectively engages only the necessary neurons in response to specific inputs. This selective activation significantly saves power and accelerates response times, enabling the model to handle tasks with remarkable efficiency.
One of the most impressive claims surrounding SpikingBrain 1.0 is its ability to learn from a fraction of the training data traditionally necessary for similar systems, utilizing less than 2% of what mainstream AI models require. This efficiency is particularly evident when processing extensive texts, where SpikingBrain 1.0 reportedly operates up to 100 times faster than its conventional counterparts, as indicated by a non-peer-reviewed technical paper posted on arXiv, an open-access research repository.
Moreover, SpikingBrain 1.0 functions entirely within China’s domestic AI ecosystem, leveraging the MetaX chip platform instead of the widely used Nvidia GPU hardware. This development is of strategic significance, especially as the United States enforces tighter export controls on advanced AI chips, positioning this technology as a key player in the global AI landscape.
Li Guoqi, a lead researcher at the Institute of Automation, highlighted that this model represents a new frontier for AI development, specifically optimized for Chinese chips. He outlined the potential applications of SpikingBrain 1.0 to process extensive data sequences, such as legal documents, medical records, and scientific simulations, effectively signaling its versatility and relevance across multiple sectors.
In a bid to promote further exploration and use of this technology, Li’s team has open-sourced a smaller version of the model and made a more substantial version available online for public testing. On its demo site, the system introduces itself: “Hello! I’m SpikingBrain 1.0, or ‘Shunxi’, a brain-inspired AI model. I combine the way the human brain processes information with a spiking computation method, aiming to deliver powerful, reliable, and energy-efficient AI services entirely built on Chinese technology.”
In contrast to the prevalent AI models of today, which often demand immense computing resources, SpikingBrain 1.0 offers an energy-efficient alternative for both model training and deployment. Companies typically lean on vast data centers filled with high-performance chips, leading to significant electricity and cooling expenses. Even after the initial training phase, these models continue to consume substantial resources, particularly during tasks that require extensive input or complicated responses.
SpikingBrain 1.0’s approach diverges from this traditional methodology. By drawing on the selective processing of real neurons, the system does not attempt to handle every piece of information simultaneously. Instead, it responds only to relevant inputs, consuming less power while still managing complex tasks, much as human cognition does.
A remarkable feature of this AI model is its core technology known as “spiking computation,” which mirrors the brain’s tendency to send rapid bursts of signals only when stimulated. This event-driven mechanism inhibits unnecessary activation, allowing SpikingBrain 1.0 to remain quiet during inactive periods—reinforcing its energy efficiency and operational economy.
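The event-driven idea described above can be sketched with a generic leaky integrate-and-fire neuron, the textbook building block of spiking computation. This is an illustrative sketch only: the threshold, decay, and reset behavior here are common defaults, not details taken from the SpikingBrain 1.0 paper.

```python
# Minimal sketch of event-driven "spiking" computation using a generic
# leaky integrate-and-fire neuron. Parameters are illustrative, not
# drawn from the SpikingBrain 1.0 paper.

def run_spiking_neuron(inputs, threshold=1.0, decay=0.9):
    """Accumulate input into a membrane potential; emit a spike (1)
    only when the potential crosses the threshold, else stay silent (0)."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * decay + x   # leaky integration
        if potential >= threshold:
            spikes.append(1)                # event: the neuron fires
            potential = 0.0                 # reset after spiking
        else:
            spikes.append(0)                # silent: no downstream work
    return spikes

# Quiet input produces no spikes, so nothing downstream is activated;
# only sufficiently strong or accumulated input triggers an event.
print(run_spiking_neuron([0.0, 0.0, 0.6, 0.6, 0.0, 1.5]))
# → [0, 0, 0, 1, 0, 1]
```

The key property for efficiency is visible in the output: most time steps produce no spike, and in hardware an absent spike means no computation and no energy spent, which is the economy the article attributes to the model.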
To validate their approach, the development team has created two versions of SpikingBrain 1.0: one with 7 billion parameters and a larger one with 76 billion. While the work has yet to undergo peer review, this ambitious project marks a notable advance in AI technology, with potentially far-reaching implications.
-
RSS co-creator launches new protocol for AI data licensing | TechCrunch
In an era where artificial intelligence is rapidly evolving, the issue of data licensing has become a hot topic. With significant cases pending in the AI landscape, including high-profile lawsuits involving major companies, a solution is critically needed. Enter Real Simple Licensing (RSL), a groundbreaking protocol launched by a coalition of web publishers and technologists, including Eckart Walther, one of the co-creators of the RSS standard.
The AI industry is facing an unprecedented challenge regarding the use of training data. Following Anthropic’s staggering $1.5 billion copyright settlement, organizations are increasingly aware of the potential for copyright lawsuits that could endanger innovation and advancement in AI. RSL aims to set a framework that ensures legal clarity and protects the rights of data providers while allowing AI creators to utilize content responsibly.
RSL is positioned as a significant advancement in the quest for a scalable and effective data licensing system. Backed by major web publishers such as Reddit, Quora, and Yahoo, the ambition is clear: facilitate agreements that meet the needs of both AI companies and content creators. The protocol is designed to operate at scale, fostering a cooperative environment where both parties can thrive without fear of legal repercussions.
At the heart of the Real Simple Licensing initiative is a technical framework designed to make licensing straightforward and machine-readable. By embedding licensing terms in a website’s robots.txt file, publishers can specify the conditions under which others may use their data. This approach simplifies the complex web of licensing agreements, making it easier for AI firms to understand their obligations.
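To make the mechanism concrete, here is a hedged sketch of how a crawler might read such terms. The `License:` directive name and the example file layout are illustrative assumptions for this sketch, not the exact RSL specification; in practice a crawler would follow the published standard.

```python
# Hedged sketch: an AI crawler reading machine-readable licensing
# terms out of robots.txt. The "License:" directive name used here is
# an illustrative assumption, not the exact RSL wire format.

def extract_license_urls(robots_txt: str) -> list[str]:
    """Return URLs of license documents declared in a robots.txt body."""
    urls = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()        # drop comments
        if line.lower().startswith("license:"):
            urls.append(line.split(":", 1)[1].strip())
    return urls

example = """\
User-agent: *
Allow: /
# Point AI crawlers at this site's licensing terms:
License: https://example.com/license.xml
"""
print(extract_license_urls(example))
# → ['https://example.com/license.xml']
```

The appeal of piggybacking on robots.txt is that every well-behaved crawler already fetches and parses it, so license discovery requires no new infrastructure on the publisher's side.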
Legal infrastructure is just as vital to RSL’s framework. The establishment of the RSL Collective serves as a backbone for navigating licensing agreements and royalty collections. Comparable to ASCAP’s role in the music industry, the Collective provides a singular point of contact for managing multiple licensing agreements, thus alleviating potential confusion and ensuring that data providers are fairly compensated.
Numerous well-known entities have joined the RSL Collective, signaling a significant momentum behind this initiative. Publishers such as Yahoo, Medium, and The Daily Beast have signed on, contributing to a diverse array of content sources that can participate in the RSL ecosystem. Even as others choose to support the standard without joining the collective, the potential to standardize data licensing across the internet could reshape how AI companies approach data sourcing.
The implementation of RSL poses an essential question: will it be sufficient to foster buy-in from large AI companies? Engagement from major players in the AI field is critical if the protocol is to fulfill its promise. Walther emphasizes the growing urgency for clear, machine-readable licensing agreements, stating, “We need to have machine-readable licensing agreements for the internet. That’s really what RSL solves.” This impetus underscores the importance of creating a viable and fair system for all interested parties.
As the conversation surrounding AI data usage continues to evolve, RSL not only exemplifies a leading approach to data licensing but offers a much-needed template for future developments in AI governance. If successful, it could serve as a blueprint for other industries grappling with similar issues of copyright and content ownership.
This protocol highlights a pivotal moment for both the tech industry and data providers. There is a powerful incentive to develop standardized systems that protect intellectual property while simultaneously facilitating innovation in AI. As technology leaders converge at events like TechCrunch Disrupt 2025, discussions around initiatives like RSL serve to enrich the dialogue on balancing innovation with ethical and legal accountability in technology.
As an avalanche of copyright lawsuits looms, RSL shines as a beacon of hope for the future of data licensing in the AI space. The industry now waits to see whether this ambitious initiative will unite disparate stakeholders and provide the clarity needed to weather potential legal storms.
-
Streaml Launches AI-Powered End-to-End Sales Automation
In a significant move for the B2B sales and marketing landscape, Streaml has introduced an AI-driven Sales Development Representative (SDR) platform that could redefine how companies manage their sales workflows. Launched on September 09, 2025, in Nashville, TN, this innovative platform promises to streamline sales processes by managing sourcing, outreach, follow-up, and meeting scheduling all within a single intelligent system.
The impetus for the creation of Streaml stems from the frustrations of its founders, who recognized the limitations of outdated and fragmented sales development processes. By consolidating these once-disparate workflows into one cohesive solution, Streaml not only reduces operational costs but also enhances overall efficiency. The platform seeks to free up time for sales teams, allowing them to focus on high-value strategies and relationship building, which are crucial for sustained growth in a competitive marketplace.
In a statement from the founding team, they emphasized that “with Streaml, you don’t need a $100K agency contract. We handle everything—LinkedIn, email, SMS, social—so your team can spend time where it matters most: closing deals.” This approach targets a significant pain point for many companies, particularly those struggling to balance comprehensive outreach with the demands of nurturing leads into customers.
Streaml sets itself apart not just by being an all-in-one solution but through its key differentiators that enhance its appeal to businesses. Notably, its full-funnel coverage takes users from the initial contact through to signed contracts, making outreach and engagement seamless across the entire customer journey.
In addition to this streamlined approach, Streaml leverages proprietary datasets that extend beyond LinkedIn. With millions of high-quality leads at its disposal, the platform broadens a company’s reach and significantly improves targeting accuracy. This expansion into diverse industries allows for a customizable approach that can be tailored to a variety of business models and needs.
Moreover, one of the standout features of Streaml is its reliance on intelligent execution through AI agents. These agents prioritize and engage high-value prospects, which not only reduces the manual workload for teams but also sharply accelerates sales cycles. The reduction in administrative tasks means that sales personnel can invest their efforts where they truly matter: building relationships and converting leads into paying customers.
Streaml’s focus on cost efficiency is another crucial element that positions it well in the market. By consolidating the functionality of multiple tools and significantly reducing the need for external agencies, Streaml provides the equivalent value of a traditional $100K+ sales agency, but at a fraction of the cost. This budget-friendly option makes it especially attractive for small to mid-sized businesses looking to optimize their sales operations without exorbitant expenditures.
The initial results from companies adopting Streaml’s platform are promising. Firms across a range of sectors, including private equity, venture capital, recruiting, and B2B tech sales, have begun to see tangible improvements in their growth trajectories. For instance, the platform has been instrumental in onboarding researchers from multimodal dataset companies and CEOs of enterprise firms, accelerating early-stage traction and customer acquisition.
In the manufacturing and logistics sector, Streaml has successfully captured millions in B2B shipping and fulfillment deals by identifying eCommerce businesses actively searching for reliable shipping and fulfillment partners. By using advanced data analytics and AI reasoning, Streaml’s impact resonates clearly across industries, leading to increased sales efficiency and better alignment of resources with market demands.
Overall, Streaml is not just another sales tool; it’s a comprehensive solution designed to meet the challenges of modern B2B sales and marketing head-on. As companies continue to navigate the digital landscape, platforms like Streaml that offer seamless integration, cost savings, and intelligent automation will likely become essential for achieving long-term success.
