The Latest AI News

  • IBM Granite 4.0 : Smaller AI Model, Bigger Results, Slashes Memory & Latency

    In the rapidly evolving world of artificial intelligence, overcoming the constraints of size and speed while optimizing overall performance has always been a challenge. Fortunately, IBM has taken a significant step toward addressing these challenges with its latest innovation: Granite 4.0. This groundbreaking AI model not only captures the imagination with its promise of being smaller and faster but also emphasizes accessibility and data safeguarding, potentially revolutionizing AI deployment across various industries.

    Granite 4.0 represents a paradigm shift in how AI models operate, combining cutting-edge technology with practical uses. At its core, the innovation unveils a hybrid architecture featuring a potent combination of transformer and Mamba layers. This design allows the model to efficiently process large datasets—tackling long-context scenarios that would traditionally hinder performance. As a result, businesses in finance, healthcare, and research can expect to conduct operations with unmatched speed and accuracy, gaining a competitive edge in their respective fields.
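    The efficiency claim behind the transformer–Mamba mix can be illustrated with a toy cost model: attention cost grows quadratically with context length, while a state-space scan grows linearly. The sketch below is illustrative Python with assumed dimensions, not IBM's actual architecture math:

```python
# Toy comparison of how per-layer compute scales with context length for
# attention vs. a Mamba-style state-space layer. Illustrative only: real
# FLOP counts depend on heads, state size, and implementation details.

def attention_cost(seq_len: int, d_model: int) -> int:
    # Self-attention mixes every token with every other token: O(n^2 * d)
    return seq_len * seq_len * d_model

def ssm_cost(seq_len: int, d_model: int, d_state: int = 16) -> int:
    # A selective state-space scan touches each token once: O(n * d * d_state)
    return seq_len * d_model * d_state

d = 4096  # assumed hidden size for illustration
for n in (1_000, 10_000, 100_000):
    ratio = attention_cost(n, d) / ssm_cost(n, d)
    print(f"context {n:>7}: attention/SSM cost ratio ~ {ratio:,.1f}x")
```

    The gap widens linearly with context length, which is why hybrid designs reserve full attention for only some layers in long-context workloads.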

    The implications of this new architecture are significant. Typically, advanced AI models require extensive computational resources, which can be a significant barrier for businesses, especially those operating within smaller budgets or limited technical capabilities. Granite 4.0 counters that notion by activating only 9 billion out of its available 32 billion parameters, drastically reducing memory usage while still outperforming larger models. This not only democratizes access to advanced artificial intelligence but also makes deployment feasible for even the most resource-conscious environments.
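    A rough back-of-envelope helps make the 9-billion-active / 32-billion-total figure concrete. The sketch below computes weight-memory footprints at common precisions; note that in a mixture-of-experts design the full parameter set may still need to be resident, so the active count chiefly reduces per-token compute and memory bandwidth:

```python
# Back-of-envelope memory math for a model holding 32B parameters but
# activating only 9B per token (figures from the article). Byte counts
# assume common precisions and ignore KV cache and runtime overhead.

GB = 1024 ** 3

def weight_memory_gb(params: float, bytes_per_param: int) -> float:
    return params * bytes_per_param / GB

total, active = 32e9, 9e9
for label, nbytes in (("fp16", 2), ("int8", 1)):
    print(f"{label}: total weights {weight_memory_gb(total, nbytes):.1f} GB, "
          f"active per token {weight_memory_gb(active, nbytes):.1f} GB")
```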

    Another standout feature of Granite 4.0 is its offline functionality, enabled by the integration with the Transformers.js library. This capability is paramount for industries where data security is non-negotiable, such as healthcare and finance. By allowing AI operations to continue without requiring constant internet connectivity, organizations can ensure the privacy and reliability of sensitive information, all while leveraging the power of AI to enhance their service delivery.

    Furthermore, Granite 4.0 has been designed to address the security and compliance challenges that often accompany technology deployment. The incorporation of cryptographic signing and adherence to established standards such as ISO/IEC 42001 ensures that organizations using Granite 4.0 are not only compliant but also safeguarded against potential vulnerabilities. This level of security is essential in regulated industries, where the stakes can be particularly high.
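    To illustrate what artifact signing buys in practice, here is a minimal, hypothetical verification sketch using Python's standard library. It stands in for, and is not, IBM's actual signing scheme; real deployments would use public-key signatures rather than the shared-key HMAC shown here:

```python
# Hypothetical sketch: verify a model artifact's integrity before loading.
# Uses an HMAC over the file bytes as a stand-in for a real signature check.

import hashlib
import hmac

def artifact_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, key: bytes, expected_tag: str) -> bool:
    tag = hmac.new(key, data, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(tag, expected_tag)

weights = b"fake-model-weights"          # stand-in for a weights file
key = b"org-distribution-key"            # stand-in for a signing key
tag = hmac.new(key, weights, hashlib.sha256).hexdigest()

assert verify_artifact(weights, key, tag)             # intact artifact passes
assert not verify_artifact(weights + b"!", key, tag)  # tampered artifact fails
print("artifact verified:", artifact_digest(weights)[:12])
```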

    The open-source nature of Granite 4.0 further underscores its commitment to broad accessibility. Developers are invited to explore this architecture, providing opportunities to customize and innovate AI solutions that match diverse application needs. As more developers join the movement, the shared contributions are likely to enhance the AI landscape, potentially leading to rapid advancements and groundbreaking applications across various sectors.

    As the landscape of AI continues to evolve, Granite 4.0 posits a compelling question: Could this be the tipping point that makes advanced AI tools a universal standard? The answer lies in its practical implications and the dynamic ways it allows businesses of all sizes to engage with AI technology. By creating a more user-friendly and resource-efficient model, IBM is signaling that advanced AI is not only about raw power but also about accessibility and usability.

    In conclusion, IBM’s Granite 4.0 stands to reshape the framework of AI deployment dramatically. By prioritizing efficiency, security, and accessibility, it paves the way for businesses to integrate advanced AI tools into their operations meaningfully. As organizations consider adapting to this new reality, there lies an opportunity to utilize AI not just as an abstract concept but as a fundamental driver of progress across various industries.


  • Anthropic’s $50B AI Data Center Bet

    In a bold move signaling the escalating race for AI supremacy, the startup Anthropic has unveiled an ambitious plan to invest $50 billion into building state-of-the-art data centers across the United States. This substantial investment not only highlights Anthropic’s commitment to strengthening its infrastructure but also reflects the increasing urgency among tech firms to bolster their AI capabilities amid fierce competition.

    The primary objective of this initiative is to develop facilities that cater specifically to Anthropic’s advanced cloud AI model needs. Collaborating with the infrastructure provider FluidStack, the company has earmarked Texas and New York as the focal points for these data centers, with aspirations for further expansion to other locations. This strategic placement underscores the importance of situating cutting-edge AI infrastructure in regions pivotal for technological innovation.

    Anthropic’s announcement emerges within a broader context of significant tech investments announced this year, as various companies pivot to fortify their AI presence in the U.S. The trend dovetails with federal efforts to position the United States as the world leader in artificial intelligence: in January, an executive order on AI leadership directed the creation of an AI Action Plan, reinforcing a commitment to domestic AI growth.

    The AI Action Plan’s overarching vision is to solidify the U.S. as the “world’s AI capital.” This initiative gained traction during the Trump Tech and AI Summit in July, where American firms unveiled substantial AI and energy investment commitments to support this vision. Anthropic’s investment aligns perfectly with these objectives, further energizing the national push towards AI leadership.

    As part of the anticipated impact from their new data centers, Anthropic projects that approximately 800 permanent jobs will be created along with about 2,400 construction-related roles during the development phase. These job opportunities present a significant boost to the economy, combining both immediate construction roles and sustainable positions that will contribute to the operational landscape of AI.

    Expected to be operational by 2026, these data centers are set to play a crucial role not only in advancing Anthropic’s AI capabilities but also in enhancing the overall landscape of cloud computing across the nation. As the demand for AI technologies continues to rise, the establishment of these dedicated facilities signifies a major step forward for both Anthropic and the industry.

    Moreover, with backing from industry giants like Google’s parent company Alphabet and Amazon, Anthropic’s strategic moves indicate a solidified foundation for its ambitious projects. Earlier this year, the startup was valued at approximately $183 billion, emphasizing the growing confidence investors have in Anthropic’s vision and future potential. This ambitious venture serves as a catalyst for innovation, potentially setting new benchmarks in the AI field.

    As the landscape of AI rapidly evolves, Anthropic’s $50 billion infrastructure investment could redefine the dynamics of AI development and cloud capabilities in the United States. The company’s initiative not only echoes the sentiments of a technological renaissance but also asserts its readiness to compete in the higher echelons of AI innovation. This investment marks a crucial moment in the ongoing evolution of artificial intelligence, paralleling broader trends within the tech industry that seek to unify and expand the capabilities of AI technologies.


  • ByteDance unveils China’s most affordable AI coding agent at just US$1.30 a month

    In an exciting development for the tech industry, ByteDance, the parent company of TikTok, has debuted a groundbreaking AI coding agent capable of revolutionizing the way developers approach coding tasks. Priced at a mere 9.9 yuan (approximately US$1.30) for the first month, this new tool signifies a significant leap forward in the competitive world of AI developer tools in China.

    The newly launched Doubao-Seed-Code model, which will carry a standard monthly fee of 40 yuan (around US$5.60) after the first month, was revealed on November 11, coinciding with China’s Singles’ Day shopping festival. This timing not only helps capture the attention of consumers but also aligns with the peak shopping season, amplifying its promotional reach.

    ByteDance’s recent expansion into artificial intelligence showcases its commitment to leading the technology sector. During a corporate event held in October, Volcano Engine president Tan Dai reported that usage of ByteDance’s Doubao chatbot has doubled over the past six months, indicating rapid adoption among Chinese consumers eager for effective AI solutions.

    Remarkably, the Doubao-Seed-Code model has entered the competitive landscape with impressive credentials. It performed exceptionally well on the SWE-Bench Verified test, achieving scores that put it on par with established AI systems such as Anthropic’s Claude Sonnet. This achievement not only underscores ByteDance’s technical capabilities but also enhances its standing in the global AI arena.

    In a market increasingly defined by geopolitical nuances, this launch comes at a critical moment. After US-based Anthropic recently revised its service restrictions to exclude access for Chinese subsidiaries, the introduction of Doubao-Seed-Code demonstrates ByteDance’s intent to maintain robust offerings amidst tightening global restrictions. By providing an advanced and affordable tool, the company is positioning itself to cater to the increasingly important Chinese technological landscape.

    Volcano Engine’s Doubao-Seed-Code stands out with its unique compatibility features, supporting a range of popular development tools including veCLI, Cursor, and Cline. Its capacity for integration with APIs, such as those from Anthropic, potentially expands its usability and appeal among developers looking for versatile coding solutions.
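    In practice, "Anthropic-compatible" typically means a client builds the same Messages-style request body and only swaps the base URL. The sketch below is a hypothetical illustration; the endpoint and model name are assumptions, not documented values:

```python
# Hypothetical sketch of an Anthropic-style Messages request body pointed
# at a different provider. Endpoint URL and model name are invented for
# illustration; consult the vendor's documentation for real values.

import json

def build_messages_request(model: str, prompt: str,
                           max_tokens: int = 1024) -> dict:
    # Anthropic's Messages API uses model, max_tokens, and a messages list
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

BASE_URL = "https://example-provider.invalid/v1/messages"  # hypothetical
body = build_messages_request("doubao-seed-code", "Refactor this function...")
print(json.dumps(body, indent=2))
```

    Because only the base URL and credentials change, tools already wired for the Anthropic request shape can be repointed with minimal code changes.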

    With a context window of up to 256,000 tokens per query, Doubao-Seed-Code is particularly well-equipped to tackle complex codebases, a significant advantage for full-stack application developers. This capability represents a fresh offering in a market eager for tools that can streamline workflows and enhance productivity.
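    A large context window changes how much code fits in a single request. The sketch below packs files greedily against the article's 256,000 figure (treated here as tokens, with an assumed 4-characters-per-token heuristic); both numbers are illustrative, not vendor specifications:

```python
# Illustrative greedy packing of source files into context-window-sized
# batches. The window size and chars-per-token heuristic are assumptions.

CONTEXT_TOKENS = 256_000   # window size reported in the article
CHARS_PER_TOKEN = 4        # common rough heuristic for code and text

def files_per_request(file_sizes_chars: list[int]) -> list[list[int]]:
    """Greedily pack file sizes into batches that fit one context window."""
    budget = CONTEXT_TOKENS * CHARS_PER_TOKEN
    batches, current, used = [], [], 0
    for size in file_sizes_chars:
        if used + size > budget and current:
            batches.append(current)   # flush the full batch
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        batches.append(current)
    return batches

# 300 source files of ~20,000 characters each (~6 MB of code)
batches = files_per_request([20_000] * 300)
print(f"{len(batches)} request(s) needed")
```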

    ByteDance’s new coding model is the result of a large-scale, agent-intensive training system tailored to meet the demands of modern coding challenges. Its integration into the Trae coding app is also noteworthy, coming just six days after access to Anthropic’s Claude models was cut off. This rapid turnaround highlights the company’s agility and responsiveness in a fast-paced industry.

    In terms of performance metrics, Doubao-Seed-Code achieved a remarkable 78.8% score on the SWE-Bench Verified benchmark, illustrating that affordability does not come at the expense of quality or effectiveness. This achievement positions it as a strong competitor in the AI landscape, particularly appealing to cost-conscious developers.

    The release of Doubao-Seed-Code is part of a wider trend observed among Chinese tech firms, where rapid advancements in AI technology have led to the launch of various new models. Recently, additions like Moonshot AI’s Kimi K2 and MiniMax’s M2 have drawn significant attention, enhancing China’s reputation as a burgeoning centre for AI innovation.

    As the competition intensifies in the AI developer tools market, ByteDance’s Doubao-Seed-Code represents a compelling tool for business leaders, product builders, and investors seeking to leverage cutting-edge technology in their strategies. With its combination of affordability, technical prowess, and strategic timing, this new coding agent could well redefine coding practices and enhance productivity across the board.


  • GE HealthCare and RadNet’s DeepHealth Division Sign Letter of Intent to Advance Innovation and Adoption of AI-Powered Imaging Across Multiple Modalities and Remote Scanning

    In a significant move towards revolutionizing healthcare imaging, GE HealthCare has announced a collaborative effort with DeepHealth, a subsidiary of RadNet, to leverage artificial intelligence across multiple imaging modalities. This partnership, which was officially unveiled on November 12, 2025, aims not only to enhance breast cancer care but also to integrate new technologies into ultrasound imaging, thereby optimizing clinical workflows and improving patient outcomes.

    The collaboration builds on the foundation laid in 2024 when both companies previously joined forces to combine GE HealthCare’s advanced mammography system, Senographe Pristina™, with DeepHealth’s AI-driven Breast Suite. This initiative significantly improved image interpretation and operational efficiency, empowering healthcare providers with the tools they need to deliver precise care. As a natural progression of their successful collaboration, GE HealthCare and DeepHealth plan to broaden their focus to encompass a global distribution arrangement for these innovative solutions.

    Central to this initiative is the ambition to expand access to breast cancer solutions worldwide. It reflects a growing recognition of the need for improved screening technologies, particularly in underserved regions, where expert care is often inaccessible. By making their technologies available in diverse settings, the partners intend to impact patient care on a broader scale.

    Furthermore, the partnership will introduce GE HealthCare’s ultrasound imaging capabilities into DeepHealth’s AI-powered Thyroid Suite. This integration aims to enhance intelligent clinical decision support and streamline reporting automation, which is crucial for efficient patient management in thyroid diseases. Such technological advances not only aid in improving diagnostic accuracy but also empower clinicians to make quicker, more informed decisions, thus directly influencing patient care and outcomes.

    The introduction of TechLive™, DeepHealth’s remote scanning solution, promises to simplify complex workflows associated with ultrasound diagnostics. By integrating this tool with GE HealthCare’s extensive ultrasound portfolio, the initiative aims to facilitate remote connectivity and enhance access to expert care—from small clinics to larger hospitals. This will not only support clinicians in their diagnostic efforts but also ensure that patients receive timely and accurate results.

    The strategic importance of this collaboration cannot be overstated. With healthcare increasingly adopting digital solutions, the move towards AI-enhanced imaging systems represents a substantial shift that aims to improve efficiency and accessibility in clinical practice. As noted by Karley Yoder, CEO of Comprehensive Care Ultrasound at GE HealthCare, this partnership underscores the synergy between intelligent technology and collaborative strategies in advancing precision care. Yoder expressed excitement about incorporating DeepHealth’s offerings into their ultrasound solutions, emphasizing the potential for faster and more reliable clinical decisions.

    DeepHealth’s CEO, Kees Wesdorp, echoed these sentiments, stating that this partnership is set to establish a new standard in AI-powered healthcare. The combined efforts will pave the way for better clinical outcomes and enhance the overall patient experience, which is critical in today’s healthcare landscape where patient-centric models are gaining prominence. The utilization of AI in medical imaging not only promises to enhance the accuracy of diagnoses but also aims to streamline the workflows within healthcare facilities, thereby allowing healthcare professionals to focus more on patient care rather than administrative tasks.

    As the world continues to grapple with healthcare challenges, particularly in areas such as cancer detection and treatment, innovations like those being developed through this collaboration will be vital in addressing these issues. The commitment to infuse AI capabilities into everyday healthcare practices signals a future where technology and patient care go hand in hand, improving lives on a global scale.

    Through their expanded collaboration, GE HealthCare and DeepHealth are working towards a vision where advanced imaging solutions are accessible to all, heralding a new era of AI-enhanced healthcare that prioritizes patient welfare and operational excellence.


  • How Japanese banking giant MUFG is using AI

    Mitsubishi UFJ Financial Group (MUFG), one of Japan’s largest banking institutions, is undergoing a significant transformation by integrating artificial intelligence (AI) across all facets of its operations. This initiative is not merely about digitizing existing processes but aims to revolutionize the organizational structure by adopting AI agents to work alongside human employees.

    Morito Emi, the head of MUFG’s digital strategy division, articulated this vision during the MUFG Fintech Festival held in Singapore. He emphasized that the goal is to evolve MUFG into an AI-native company, where AI plays a central role in the execution of various tasks and functions within the bank.

    Despite deploying AI tools throughout the organization, Emi noted that only about half of the employees regularly engage with these technologies. To bridge this adoption gap, MUFG is intensifying its focus on three essential strategies: promoting company-wide AI training, establishing a robust architecture for AI agents, and redefining its data strategy to enhance accessibility for these AI systems.

    Launched in July 2025, the “Hello AI @ MUFG” campaign aims to embed AI into the company culture. This initiative kicked off with a prompt challenge, where over 6,000 employees competed to create the most effective generative AI instructions, showcasing the bank’s commitment to fostering an AI-centric workforce.

    Collaboration is a pivotal part of MUFG’s approach to enhancing its AI capabilities. In October 2024, the bank partnered with OpenAI to provide its employees access to ChatGPT Enterprise. This partnership is intended to facilitate the development of advanced AI applications and ensure dedicated support from industry experts.

    Moreover, MUFG established a significant three-year partnership with Sakana AI, a rapidly growing Japanese AI startup that achieved unicorn status within a year. This alliance focuses on refining decision-making processes, with their inaugural project set to produce an AI-powered loan expert, designed to provide better recommendations and adapt through human feedback.

    Currently, MUFG is executing approximately 60 advanced AI use cases within its operations, which are projected to save around three million work hours annually. For instance, one innovative application generates proposals for corporate clients, allowing bankers to dedicate more time to relationship-building activities.

    Another noteworthy integration is the AI-based mergers and acquisitions matching function, which assists analysts in identifying potential acquisition targets that might be overlooked. Meanwhile, the bank’s trust banking division employs an AI document reader that extracts information from intricate legal and financial documents, thereby saving thousands of hours that staff would otherwise spend manually gathering data.

    Looking ahead, MUFG has ambitious plans to enhance its offerings further, including a goal to triple the number of online small-business loans serviced by fiscal year 2026. This will be achieved through the implementation of AI for credit analysis, streamlining the approval process, and increasing accessibility for small enterprises.

    In conclusion, MUFG’s strategic pivot towards becoming an AI-native organization illustrates its commitment to not only enhancing operational efficiency but also transforming the role of AI in banking. By prioritizing employee training, fostering collaborative partnerships, and embedding cutting-edge technology into its business model, MUFG is setting a paradigm for how financial institutions can leverage AI to achieve significant advancements.


  • China’s leading online travel platform, Alibaba-owned Fliggy, prepares for ‘omni-intelligent travel agents’ future by placing AI at centre of strategy

    The travel industry is undergoing a seismic shift, and Alibaba-owned Fliggy, one of China’s premier online travel platforms, stands at the forefront of this transformation. By adopting an innovative multi-agent approach, Fliggy is transitioning from traditional online travel agencies (OTAs) to the next generation—what it terms ‘omni-intelligent travel agents.’ This shift aims to harness the power of artificial intelligence not only to enhance user experiences but also to revolutionize how travel planning is conducted.

    The cornerstone of Fliggy’s strategy is its commitment to AI innovation. Throughout the year, the platform has rolled out a series of AI-powered products designed for both individual consumers and businesses. This robust AI ecosystem allows multiple AI agents to collaborate, providing users with sophisticated travel planning capabilities that mimic those of seasoned travel consultants. This effort commenced in April with the introduction of ‘AskMe,’ a smart AI travel assistant that has received significant upgrades since its debut.

    The AskMe assistant is particularly noteworthy, enabling users to interact with various specialized AI agents for a personalized travel planning experience. Its features have expanded to include itinerary maps, popular destination heat maps, and innovative photo-based audio guides introduced in September. These enhancements reflect Fliggy’s relentless pursuit of providing seamless and tailored travel solutions to its users.

    Fliggy’s business travel segment, known as AliBtrip, is not lagging behind. The division has released an AI tool designed to streamline business travel planning. This includes an employee travel agent for personalized itineraries and a corporate management agent to assist in backend processes, ensuring that all operations comply with regulations. This integration of AI into both consumer-facing and B2B products highlights Fliggy’s versatility as a tech-driven travel platform.

    International collaboration is also key to Fliggy’s strategy. The company has partnered with various European organizations to incorporate its AI-driven offerings to cater to the increasing number of Chinese tourists traveling abroad. These partnerships promise to enhance visitor experiences by providing accurate AI-generated interpretations and materials that help communicate local cultures and attractions effectively.

    “At Fliggy, we believe the future lies not with traditional online travel agencies (OTAs), but with omni-intelligent travel agents (OTAs)—these will be the new ‘OTAs.’” – Dr. Alex Chen, Chief Technology Officer at Fliggy

    This vision underscores Fliggy’s ambition to lead the future of travel through comprehensive AI utilization. As of late March, an impressive 10 percent of customer inquiries on the platform were already managed by AI, showcasing its efficacy in improving operational efficiency. Moreover, tools like the AI publishing tool, which enables travel agencies to transform itineraries from Word documents to publishable formats in just 60 seconds, dramatically speed up the process of product listing. This has been shown to enhance efficiency by reducing turnaround times by a factor of 3.5 and automating inventory management.

    On the consumer interaction front, AskMe’s unique design empowers users to engage multiple specialized AI agents simultaneously for various needs—ranging from flight searches to local tour guidance. As users navigate their travel plans, they benefit from dedicated assistants that enhance the overall experience, ensuring that no detail is overlooked. Such innovations not only improve user satisfaction but also represent a significant leap forward in how technology can personalize and optimize the travel experience.

    With these developments, Fliggy is not just adapting to changes in the travel landscape; it is actively shaping them. The platform’s investment in AI-driven solutions is indicative of a broader trend in the industry, where technology will increasingly dominate and define travel experiences. As other players in the market watch closely, Fliggy’s path toward becoming an omni-intelligent travel agent serves as a compelling blueprint for success in the rapidly evolving world of travel.


  • AI scams fuel rise in fake online car sales. How California is trying to protect consumers.

    The landscape of car buying is rapidly evolving as more consumers turn to online marketplaces for purchasing used cars. While convenience is at the forefront, a troubling surge in artificial intelligence-powered scams threatens to undermine this transition. A recent report by AuthenticID reveals that nearly 5% of automotive transactions are now fraudulent, underscoring the pressing need for consumers to be vigilant in their online searches.

    In a striking example of this emerging crisis, 18-year-old Andrew Arenas was recently caught in a harrowing scenario that illustrates the dangers of fake online car sales. Returning home in January 2024, Arenas was unexpectedly detained by law enforcement who informed him that the car he purchased was reported stolen. Despite having the title and registration from the DMV, he was handcuffed on the asphalt, an experience he describes as surreal and shocking.

    With the listing for his car still visible on Facebook Marketplace, Arenas’s story serves as a stark reminder that the allure of a great deal can quickly turn into a nightmare. Consumer advocate Rosemary Shahan highlights that online car scams are becoming increasingly sophisticated, with AI playing a crucial role in enabling criminals to bypass traditional safety measures.

    “AI takes it to a whole new level,” she asserts, revealing how technology can be misused to create convincing counterfeit car titles. The implications of such technology are staggering; imagine a scenario where buyers invest substantial sums into vehicles that do not have legitimate ownership status.

    This emerging issue raises critical questions about the responsibility of tech companies and online platforms in safeguarding consumers. Paul Taske, a lawyer representing NetChoice, which advocates for the tech industry, argues that legislation intended to require online marketplaces to track high-volume sellers can be overly burdensome. He believes that effective consumer protection should focus on tackling the perpetrators of these frauds rather than placing restrictions on platforms.

    Nevertheless, California Attorney General Rob Bonta points out that online marketplaces have the unique opportunity to leverage AI as a tool for consumer protection, rather than allowing it to be a vehicle for scams. He posits that AI can be employed to detect fraudulent activities by identifying unusual selling patterns and flagging suspicious transactions for further investigation.
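    The kind of pattern-based screening Bonta describes can be sketched as a simple rule-based risk scorer. The features and thresholds below are invented for illustration and bear no relation to any real marketplace's detection logic:

```python
# Toy illustration of pattern-based fraud screening for marketplace car
# listings. Features and thresholds are invented for illustration only;
# a production system would use learned models and far richer signals.

def risk_score(listing: dict) -> int:
    score = 0
    if listing["price"] < 0.5 * listing["market_value"]:
        score += 2   # too-good-to-be-true pricing
    if listing["seller_age_days"] < 30:
        score += 1   # brand-new seller account
    if listing["listings_last_week"] > 10:
        score += 2   # unusually high sales volume for a private seller
    return score

suspicious = {"price": 4_000, "market_value": 12_000,
              "seller_age_days": 5, "listings_last_week": 14}
normal = {"price": 11_500, "market_value": 12_000,
          "seller_age_days": 900, "listings_last_week": 1}

print("suspicious listing score:", risk_score(suspicious))  # flag for review
print("normal listing score:", risk_score(normal))
```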

    Advocates like Shahan hope that lawmakers take heed of these insights, emphasizing the necessity for robust regulatory measures to protect consumers. This sentiment is particularly poignant in the light of Arenas’s experience, who warns that what might seem like a “perfect” car could in fact be a meticulously crafted fake.

    Resolute in his intention to raise awareness, Arenas’s story serves not only as a cautionary tale for prospective car buyers but also as a call to action for both tech companies and lawmakers to devise strategies that integrate advanced technologies for consumer safety. The landscape of online car buying is shifting, and it is paramount that both buyers and sellers navigate this new terrain with care and caution.

    In the wake of genuine cases like that of Andrew Arenas, the need for improved frameworks that account for the realities of AI misuse becomes ever more pressing. A commitment to consumer advocacy, innovation in fraud detection, and a collaborative approach from stakeholders can pave the way toward a safer online marketplace.


  • CoreWeave beats third-quarter revenue estimates on AI computing boom

    The contemporary landscape of artificial intelligence and cloud services continues to evolve at a breathtaking pace, with companies like CoreWeave emerging as key players in this arena. CoreWeave, which pivoted from Ethereum mining to AI cloud computing, reported third-quarter revenue that more than doubled to $1.36 billion, surpassing analysts’ expectations of $1.29 billion and underscoring its ability to capitalize on surging demand for AI services.

    Despite the impressive revenue growth, the company faced challenges that prompted a revision of its annual revenue forecast. Chief Financial Officer Nitin Agrawal announced that projections for 2025 revenue would be between $5.05 billion and $5.15 billion—down from an earlier estimate of $5.15 billion to $5.35 billion. This adjustment was primarily due to a delay with a third-party data center partner, impacting market sentiment and resulting in a more than 10% drop in the stock price during early trading.

    CoreWeave’s recent announcements highlight their strategic position in the AI landscape, especially given their lucrative agreements with industry titans. Notably, the company secured a $14 billion contract with Meta Platforms and a $6.5 billion partnership with OpenAI, further affirming its status as an essential infrastructure provider amid the accelerating demand for AI-powered graphics processing units (GPUs). These partnerships not only reflect confidence in CoreWeave’s capabilities but also serve as a testament to the burgeoning market for AI technology.

    Looking ahead, Agrawal indicated that capital spending is projected to more than double compared to 2025 levels, with an expected investment of between $12 billion and $14 billion. This suggests that CoreWeave is prepared to expand its cloud computing capabilities significantly, critical as competition in the AI space intensifies. The transition from a cryptocurrency mining operation to a cloud computing powerhouse is a remarkable reinvention, demonstrating CoreWeave’s adaptability and foresight in a rapidly changing market.

    However, this aggressive growth strategy is not without its challenges. The company’s adjusted operating income margin fell to 16% in the latest quarter, down from 21% the previous year. Market pressures, including rising prices for AI-specific chips, increasing competition for scarce computing resources, and the high costs associated with expanding cloud infrastructure, pose risks to maintaining profitability.

    Despite these challenges, CoreWeave’s stock performance remains noteworthy: shares have more than doubled since the company went public earlier this year at an IPO price of $40 per share. Today, the company boasts a market capitalization exceeding $50 billion, a remarkable feat that emphasizes the faith investors have in its transformative approach to AI and cloud services.

    As the AI revolution accelerates, companies like CoreWeave are demonstrating that there is substantial business value in harnessing advanced computing technologies. The ongoing collaboration with eminent firms not only signifies CoreWeave’s critical role in the AI supply chain but also highlights the ever-increasing need for robust and responsive cloud platforms that can handle the complexities of AI workloads.

    In summary, while CoreWeave navigates the challenges posed by partner delays and a dynamic competitive landscape, their achievements in revenue growth and strategic partnerships reflect an optimistic outlook for the future. The company’s ability to adapt to market demands and commit significant capital toward expansion positions it well within the thriving AI sector.


  • AI is rewriting how software is built and secured

    Illustration

    The integration of Artificial Intelligence (AI) into software development has fundamentally transformed the landscape of coding, leading to both innovative advancements and complex challenges. A recent report from Cycode, titled The 2026 State of Product Security for the AI Era, highlights the pervasive role AI plays in development pipelines and the consequential security risks that organizations face as they adapt to new methodologies.

    According to a comprehensive survey of 400 Chief Information Security Officers (CISOs), Application Security leaders, and DevSecOps managers across the United States and the United Kingdom, AI-generated code has embedded itself in every participating organization. Remarkably, nearly all respondents reported either using or pilot-testing AI coding assistants, indicating a significant leap in the adoption of AI technologies within the software development framework.

    AI’s reach now extends into production: a staggering 97 percent of organizations acknowledged that AI-generated code is present in their production environments, yet only 19 percent claim to have complete visibility into the extent and manner of AI utilization. This massive blind spot presents critical challenges, and many security leaders express heightened concern that their overall risk profile has escalated with the introduction of AI tools.

    Particularly concerning is the phenomenon of shadow AI, in which employees independently adopt unauthorized AI tools, plugins, and workflows without institutional oversight. The implications of this trend are severe: unregulated AI tools can process sensitive data and operate outside traditional security mechanisms, ultimately expanding the attack surface for potential breaches.

    As organizations grapple with these challenges, over half of survey respondents identified AI tool usage and software supply chain exposure as significant risk factors. Each AI model or integration can function like a supplier of ambiguous origin, eroding confidence in product integrity when oversight is lacking. The report underscores that safeguarding the code alone is insufficient; organizations must also actively manage the systems and data pipelines that generate it to ensure comprehensive security.

    Visibility and governance emerged as critical areas needing urgent attention. A mere 19 percent of organizations report robust visibility into their AI usage across development, while many rely on informal and fragmented governance processes. This gap invites oversight and accountability issues, leaving organizations vulnerable to threats stemming from invisible AI operations.

    To address these mounting concerns, product security teams are assuming newfound responsibilities in governance and compliance. More than half are now navigating regulatory obligations, leading some to implement AI bills of materials. These documents serve to meticulously catalogue models, datasets, and dependencies, thereby fostering transparency concerning AI components. This initiative builds upon the existing concept of the software bill of materials but adapts it to meet the complex needs of AI integration.
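    To make the idea concrete, here is a minimal sketch of what one AI bill-of-materials entry and a basic governance check might look like. Every name, version, and field below is illustrative; the Cycode report does not prescribe a schema.

    ```python
    # Hypothetical sketch of one AI bill-of-materials (AI-BOM) entry.
    # All names, versions, and hashes are illustrative, not from the report.
    ai_bom_entry = {
        "component": "code-assistant-model",      # what the AI component is
        "model": "example-llm-7b",                # illustrative model name
        "version": "1.4.2",
        "training_data": ["internal-repo-snapshots", "public-oss-corpus"],
        "dependencies": ["tokenizer-lib==0.9", "inference-runtime==2.1"],
        "sha256": "0" * 64,                       # placeholder integrity hash
        "approved_by": "product-security-team",   # governance sign-off
    }

    def audit(bom):
        """Flag entries lacking a governance sign-off or integrity hash."""
        issues = []
        for entry in bom:
            if not entry.get("approved_by"):
                issues.append((entry["component"], "missing approval"))
            if not entry.get("sha256"):
                issues.append((entry["component"], "missing integrity hash"))
        return issues

    print(audit([ai_bom_entry]))  # → []  (this entry passes both checks)
    ```

    Cataloguing components this way mirrors the software bill of materials, extended with the model- and data-specific fields that AI governance requires.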

    Furthermore, research suggests that if organizations do not bolster their governance frameworks, they risk perpetuating inconsistencies and operational duplications similar to those that previously led to significant breaches within supply chains. As the industry marches toward 2026, ensuring rigorous visibility, accountability, and oversight for AI-generated code will be pivotal for fostering a secure and resilient software development environment.

    As AI continues to redefine how software is constructed and secured, business leaders and security teams must proactively adapt to these rapidly changing dynamics. The journey involves not only leveraging AI for productivity gains but also understanding and managing the inherent risks that accompany its adoption.


  • Hard drives on backorder for two years as AI data centers trigger HDD shortage — delays forcing rapid transition to QLC SSDs

    Illustration

    The race to achieve artificial general intelligence (AGI) has instigated a relentless push in the tech sector, prompting significant investments in building data centers. This buildout is advancing at a rate that far exceeds manufacturers’ capacity to deliver essential components. Not only are hard disk drives (HDDs) affected, but the ongoing DRAM shortage has exacerbated the situation, with memory kit prices more than doubling in just a few months.

    According to recent reports by DigiTimes, delivery times for enterprise-grade HDDs have now escalated to a staggering two-year backlog. This situation presents a formidable challenge for firms in need of large-capacity hard drives, which are crucial for nearline storage solutions. With AI funding driving the market, hyperscalers—large-scale cloud service providers—are being forced to pivot rapidly, opting for QLC NAND-based solid-state drives (SSDs) to circumvent these extensive backorders.

    The transition from traditional HDDs to QLC SSDs—which utilize quad-level cell technology—allows these companies to manage costs while retaining endurance sufficient for cold-storage workloads. However, this shift is not without complications. As cloud providers rush to acquire QLC NAND, a new shortage may emerge, consequently driving up prices. The ripple effects of this demand increase may soon manifest as a rise in SSD prices globally, particularly since most value-oriented storage models rely on QLC to keep costs down.
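    The capacity trade-off behind that shift is simple arithmetic: QLC stores four bits per cell versus TLC’s three, so the same number of cells yields a third more capacity, at the cost of markedly lower write endurance. A rough sketch, where the program/erase cycle counts are commonly cited orders of magnitude rather than vendor specifications:

    ```python
    # Back-of-envelope NAND comparison. P/E cycle counts are rough,
    # commonly cited orders of magnitude, not vendor specifications.
    NAND_TECHS = {
        "SLC": {"bits_per_cell": 1, "pe_cycles": 100_000},
        "MLC": {"bits_per_cell": 2, "pe_cycles": 10_000},
        "TLC": {"bits_per_cell": 3, "pe_cycles": 3_000},
        "QLC": {"bits_per_cell": 4, "pe_cycles": 1_000},
    }

    def capacity_terabits(cells_billions: float, tech: str) -> float:
        """Raw capacity in terabits for a given cell count and technology."""
        return cells_billions * 1e9 * NAND_TECHS[tech]["bits_per_cell"] / 1e12

    # Same die, same cell count: QLC holds a third more data than TLC.
    print(capacity_terabits(500, "TLC"))  # → 1.5
    print(capacity_terabits(500, "QLC"))  # → 2.0
    ```

    That extra third of capacity per die is what makes QLC attractive for cold storage, where data is written once and rarely rewritten, so the lower endurance matters less.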

    DigiTimes highlights a concerning forecast: production capacity for QLC is fully booked through 2026 at several NAND manufacturers. This indicates that firms aiming for expansion in AI capabilities are scooping up available supplies, further complicating the supply landscape. Notably, the popularity of QLC NAND may well surpass that of triple-level cell (TLC) technology by early 2027, signaling a pivotal change in the data storage environment.

    Recent reports have revealed that major manufacturers, including Sandisk, have already increased NAND prices by 50%—a sharp rise that follows an earlier announcement of a 10% increase only two months prior. These sudden price hikes have left many in the industry reeling, while other companies have registered extraordinary profit margins, a stark deviation from past years’ challenges.
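    Taken together, those two hikes compound. Assuming the 50% rise applies on top of the earlier 10% increase (the reporting does not state this explicitly), the cumulative effect is roughly a 65% jump:

    ```python
    # Compounding the reported NAND price hikes: a 10% increase followed
    # two months later by a 50% increase, assuming the second applies on
    # top of the first (an assumption, not stated in the reporting).
    base = 100.0                       # arbitrary starting price index
    after_first = base * 1.10          # +10% announced earlier
    after_second = after_first * 1.50  # +50% announced later
    total_pct = (after_second / base - 1) * 100
    print(f"cumulative increase: {total_pct:.0f}%")  # → cumulative increase: 65%
    ```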

    Looking at the broader picture, the unexpected scarcity of memory and storage resources can largely be attributed to AI ambitions unleashed by the wealthiest tech giants. This urgency and unpredictability have brought unforeseen consequences to supply chains. In the span of a few short weeks, industry insiders and analysts have come to grips with the pressing reality that manufacturers previously holding buffer inventories of 2-3 months are now struggling with reduced availability, often limited to just a few weeks.

    As tech companies grapple with these shifts, the consumer market is once again feeling the effects of electronics scarcity. The rapid transition to QLC SSDs is a necessary adaptation, but it raises questions about the sustainability of supply and pricing trends going forward. As firms navigate this evolving landscape, the need for innovative solutions in both product design and manufacturing has never been more critical, alongside long-term strategies that meet consumer demand without further aggravating scarcities.

    In conclusion, the dual crises of HDD backorders and DRAM shortages are pushing enterprises toward an evolution in data storage technology that could reshape market dynamics. Companies that strategically navigate this challenging landscape may emerge with enhanced capabilities, while those that fail to adapt may find themselves stranded in a turbulent market.