The Latest AI News

  • Japan’s AI Demand Will Increase 320x by 2030, Industry Leader Says at NVIDIA AI Day Tokyo

    During NVIDIA AI Day Tokyo, industry leaders gathered to discuss the transformative landscape of artificial intelligence in Japan. With over 900 attendees, the conference featured sessions covering a range of topics, from agentic and physical AI to quantum computing and the rise of AI factories.

    One of the standout presentations came from Kuniyoshi Suzuki, senior director of the cloud AI service division at SoftBank Corp. He pointed to a staggering forecast: Japan’s demand for AI computing power is projected to grow 320-fold from 2020 levels by 2030. This prediction underscores the urgency of infrastructure and technological advancement as businesses and industries gear up for an AI-driven future.
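    A quick back-of-envelope calculation shows what such a forecast implies annually, assuming for illustration that the 320-fold growth is spread evenly over the ten years from 2020 to 2030 (an assumption of this sketch, not part of the forecast itself):

```python
# Implied compound annual growth: 320x over the 10 years from 2020 to 2030.
growth_per_year = 320 ** (1 / 10)
print(f"{growth_per_year:.2f}x per year")  # ~1.78x, i.e. roughly 78% annual growth
```

    In other words, even spread evenly, the forecast implies demand nearly doubling every year for a decade.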

    The implications of such a significant increase in demand are profound. It suggests a seismic shift in how businesses operate, requiring them to rethink their IT strategies and embrace innovative AI solutions. The rise of AI is no longer a matter of theoretical possibilities, but an imminent reality that organizations must prepare for.

    To meet this burgeoning demand, industry players like SoftBank, GMO Internet, and KDDI showcased their latest advancements at AI Day Tokyo. Each of these companies is at the forefront of developing AI technologies, illustrating their commitment to building robust ecosystems that empower developers in creating AI models and services. The collaboration between these organizations not only enhances technological capabilities but also focuses on ensuring safety and transparency in AI adoption.

    One crucial point raised by Suzuki was the necessity for Japan to build a foundation of domestic technologies. He emphasized the importance of not only developing high-performance, Japan-made large language models but also establishing a large-scale domestic computing infrastructure capable of sustaining continuous development of these models. This approach is essential for fostering innovation while mitigating potential risks associated with data privacy and security.

    The significance of establishing a self-sufficient AI ecosystem cannot be overstated. It allows Japan to safeguard its technological sovereignty and provides businesses with the tools necessary to harness AI effectively. In a world where reliance on foreign technologies can pose risks, developing homegrown solutions creates a more stable and autonomous technological environment.

    Moreover, the discussions at AI Day Tokyo reflect a broader global trend toward embracing AI in various sectors, including healthcare, finance, and manufacturing. By prioritizing AI development, Japan is not only positioning itself as a leader in the technology space but also enhancing its competitive edge on the global stage.

    Looking ahead, the path toward achieving this ambitious forecast involves concerted efforts across multiple fronts. Companies will need to invest significantly in research and development, infrastructure, and talent acquisition to ensure they remain leaders in the AI race. Collaboration between academia, industry, and government will be essential to create supportive policies and frameworks that promote innovation.

    In conclusion, the insights shared at NVIDIA AI Day Tokyo highlight the monumental shift anticipated in Japan’s demand for AI technologies by 2030. With predictions of a 320-fold increase in AI computing power, the stakes are high for businesses to adapt and innovate. The emphasis on building a domestic foundation for AI technologies represents a strategic move towards ensuring Japan remains at the forefront of the AI revolution. As organizations continue to navigate this evolving landscape, the need for vigilance and proactive development will be paramount to thrive in an increasingly AI-centric world.


  • New AI Tool Finds Hidden Brain Lesions That Doctors Miss in Children With Epilepsy

    An innovative artificial intelligence (AI) tool, recently developed by Australian researchers, is paving the way for quicker and more accurate diagnoses for children with epilepsy. This advancement addresses a longstanding challenge in the field: detecting tiny, often elusive brain lesions that traditional imaging methods frequently overlook. The breakthrough was announced on Wednesday, with researchers highlighting its potential to significantly improve patient outcomes.

    Epilepsy can arise from various causes, with structural abnormalities in the brain responsible for about one in three cases, according to medical experts. These abnormalities are often not visible on standard MRI scans, particularly the smallest lesions hidden within the folds of the brain. The researchers say their AI tool can detect lesions the size of a blueberry or smaller; the inability to see such lesions has until now been a crucial barrier to surgical intervention.

    At the helm of the research is Emma Macdonald-Laurs, a pediatric neurologist at the Royal Children’s Hospital in Melbourne. She emphasizes that the AI tool is not intended to replace the expertise of radiologists and neurologists. Instead, it acts as an adjunctive aid, working like a detective that helps healthcare professionals piece together the complex puzzle of diagnosis more efficiently. The advance aims to help the many children who have previously been overlooked as surgical candidates because of missed abnormalities in their brain scans.

    The study reported impressive outcomes from a cohort of patients suffering from conditions like cortical dysplasia and focal epilepsy. Notably, around 80 percent of these children had previously been labeled as having normal MRI scans. However, when the AI tool was applied to analyze both MRI and PET scans, it demonstrated a remarkable success rate: 94 percent in one test group and 91 percent in another. Among the 17 children in the first group, 12 underwent surgical procedures to remove their brain lesions, resulting in 11 children achieving freedom from seizures post-operation.

    Macdonald-Laurs’ team, associated with the Murdoch Children’s Research Institute, expressed optimism regarding the further application of this technology. Their next step will involve testing the AI detector in real-world hospital settings on patients who are yet to receive any diagnosis. This transition from a controlled research environment to practical application will help validate the tool’s efficacy in everyday medical practice.

    The implications of this technology are profound, especially considering that epilepsy affects approximately one in 200 children, with about one-third of cases proving resistant to standard drug treatments. The growing success of AI in diagnostics represents a compelling shift towards more intelligent, data-driven approaches in healthcare. Experts like Konrad Wagstyl, a biomedical computing specialist at King’s College London, applaud the research as a promising proof of concept, calling the results “really impressive.”

    This AI initiative is part of a larger trend of machine learning algorithms being deployed to interpret medical imaging data. Similar studies, including work by Wagstyl’s team, found that AI systems identified 64 percent of epilepsy-related brain lesions that human radiologists had previously missed. AI is not only augmenting diagnostic capabilities but reshaping the very framework of medical imaging.

    Despite its advantages, the study does note a few caveats. The use of PET scans, while beneficial, comes with concerns regarding cost-effectiveness and the degree of radiation exposure, similar to that of CT scans or X-rays. The researchers urge caution and recommend further exploration into more accessible imaging technologies. As the field continues to evolve, the potential for similar AI tools could herald a new era of healthcare where early diagnosis and intervention become more routine, ultimately leading to better health outcomes for children suffering from epilepsy and beyond.


  • Mureka V7.5 Makes History With the World’s First Fully AI-Generated Song

    The advent of artificial intelligence (AI) in creative industries has transformed the landscape of music production, significantly altering how songs are created and experienced. Mureka V7.5 has made headlines as it introduces the world’s first fully AI-generated song, a significant milestone that marks a new era in music technology.

    This groundbreaking development showcases the immense potential of AI in areas traditionally dominated by human artistry. By leveraging advanced algorithms and machine learning techniques, Mureka V7.5 can compose music without any human intervention. The AI analyzes countless patterns, genres, and styles to produce compositions that can resonate emotionally with listeners.

    Unlike previous attempts at AI music creation, which often relied on simple algorithms or pre-existing templates, Mureka V7.5 employs sophisticated neural networks to generate unique sounds, harmonies, and lyrics. This advancement signifies a leap toward more authentic and intricate AI-generated music that challenges the notion of creativity and authorship in the music industry.

    The significance of this achievement goes beyond mere technological novelty. The implications for artists, musicians, and producers are profound. As AI tools become more sophisticated, they may serve as collaborators rather than mere assistants, enabling musicians to explore new sonic territories while potentially enhancing their creative processes.

    Moreover, the commercial potential is extensive. Music production costs could decrease significantly, allowing independent artists with limited budgets to create high-quality tracks. It opens doors for personalized music experiences, where AI-crafted songs can be tailored to individual preferences and moods, enhancing user engagement across streaming platforms.

    However, the rise of AI in music presents challenges, particularly regarding copyright issues and the role of human creativity. As songs generated by AI become more mainstream, questions will inevitably arise about who holds the rights to these compositions and how they should be credited.

    The entertainment industry is already witnessing a significant shift in the way music is produced and consumed. Major record labels are exploring partnerships with AI technology developers to integrate these tools into their production processes. This collaboration could lead to the emergence of hybrid models, where human creativity is augmented by the analytic precision of AI.

    In conclusion, the introduction of Mureka V7.5 and its fully AI-generated song marks a watershed moment in the intersection of technology and the arts. As AI continues to evolve, it may redefine not just how music is made, but also the very essence of creativity itself. Music creators and industry leaders must navigate this new landscape with caution while embracing the innovative opportunities it presents.


  • Google Stax Aims to Make AI Model Evaluation Accessible for Developers

    In a world increasingly driven by artificial intelligence, the need for effective model evaluation has never been more pronounced. Google Stax emerges as a pivotal framework designed to transform how AI developers assess the quality of their models. By replacing subjective evaluations with a data-driven and repeatable methodology, Stax aims to empower developers to customize their evaluation processes to fit specific use cases, departing from reliance on generic benchmarks.

    Evaluation is crucial in the AI domain, as it directly influences the selection of the right model for any given task. Google emphasizes that quality assessment, latency considerations, and cost-effectiveness are vital parameters that must be compared to make informed decisions. Furthermore, effective evaluation plays an essential role in assessing the impact of prompt engineering and fine-tuning efforts, ensuring that improvements are real and measurable. In fields such as agent orchestration, repeatable benchmarks become invaluable, helping to guarantee that agents and their components interact reliably.

    One of the standout features of Stax is its provision of both data and tools that enable developers to build benchmarks that merge human judgment with automated evaluators. This versatility allows for extensive customization; developers can import existing, production-ready datasets or create novel datasets using LLMs to generate synthetic data. The framework offers a suite of evaluators for common metrics like verbosity and summarization, while also permitting the creation of custom evaluators tailored for more specific, nuanced criteria.

    Creating a custom evaluator in Stax is a streamlined process. It begins with selecting a base LLM that will serve as the judge. This judge receives a prompt detailing how to evaluate the outputs of the model under test. The prompt outlines various grading categories, each assigned a numerical score between 0.0 and 1.0. Additional instructions dictate the expected response format, allowing the integration of variables that refer to specific elements such as the model’s output, input history, and metadata. For reliability, the evaluator can be calibrated against trusted human ratings through classical supervised learning techniques. Moreover, the prompt can undergo fine-tuning iteratively, enhancing the consistency of ratings to align with trusted evaluators.
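    Conceptually, the judge-based evaluator flow described above can be sketched as follows. This is a minimal illustration of the LLM-as-judge pattern only; the prompt template, category names, and `stub_judge` stand-in are assumptions of this sketch, not Stax’s actual API:

```python
# Minimal sketch of an LLM-as-judge evaluator: a prompt tells a judge model
# how to grade outputs, with each category scored between 0.0 and 1.0.
# Template, categories, and stub_judge are illustrative, not the Stax API.

JUDGE_PROMPT = """You are grading a model response.
Score each category between 0.0 and 1.0.

Input: {input}
Output under test: {output}

Reply with one "category: score" line per category.
Categories: conciseness, groundedness
"""

def build_prompt(record):
    # Fill the template with the model-under-test's input and output.
    return JUDGE_PROMPT.format(input=record["input"], output=record["output"])

def parse_scores(reply):
    # Parse "name: value" lines, clamping scores into [0.0, 1.0].
    scores = {}
    for line in reply.strip().splitlines():
        name, _, value = line.partition(":")
        scores[name.strip()] = max(0.0, min(1.0, float(value)))
    return scores

def evaluate(records, judge):
    # Run the judge over a dataset and average each category's score.
    totals, count = {}, 0
    for record in records:
        for name, score in parse_scores(judge(build_prompt(record))).items():
            totals[name] = totals.get(name, 0.0) + score
        count += 1
    return {name: total / count for name, total in totals.items()}

def stub_judge(prompt):
    # Stand-in for a real LLM call; a real judge would send `prompt`
    # to the chosen base model and return its text reply.
    return "conciseness: 0.9\ngroundedness: 0.7"

results = evaluate([{"input": "Summarize X.", "output": "X is ..."}], stub_judge)
print(results)  # {'conciseness': 0.9, 'groundedness': 0.7}
```

    Calibration then amounts to comparing `evaluate`’s scores against trusted human ratings and iterating on the judge prompt until they agree.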

    While Google Stax presents a robust solution for AI model evaluation, it exists alongside a range of competitors. Alternatives such as OpenAI Evals, DeepEval, and MLflow LLM Evaluate each take distinct approaches to evaluation within the AI landscape. Developers looking for flexibility and customized solutions will find distinct value in Stax’s offerings.

    As of now, Stax supports benchmarking for an expanding array of model providers, including industry leaders such as OpenAI, Anthropic, Mistral, Grok, DeepSeek, and Google’s own models. The framework also accommodates custom model endpoints, further extending its utility. The exciting news for developers is that Stax is currently available for free while in beta, although Google has indicated that a pricing model may be introduced once the beta phase concludes.

    Another key consideration for users is data privacy. Google assures that it will not own user data, which includes prompts, custom datasets, or evaluators. Furthermore, the company commits to not using this data to train its language models. However, as users interact with different model providers, it remains crucial to be mindful of those providers’ data policies, as they will apply concurrently.

    In summary, Google Stax is a significant advancement in the realm of AI model evaluation, offering a framework that standardizes and refines the assessment process. As the AI landscape continues to evolve, tools like Stax will be essential for developers seeking to fine-tune their models and ensure optimal performance in real-world applications.


  • Insta360 Just Launched an AI-powered Speaker for Offices That Automatically Takes Meeting Notes

    Insta360 is known for its innovative action cameras, but the company is making bold strides into enterprise technology. With its latest product, the Wave, it is clearly signaling a strategic pivot toward the business market, aiming to change how meetings are conducted. This new AI-powered speakerphone not only enhances audio quality but is designed to take notes, transcribe conversations, and generate summaries, positioning itself as an essential tool for office environments.

    The Wave stands out from traditional speakerphones through its impressive design. Unlike the ubiquitous and often disregarded hockey puck-shaped speakers, the Wave presents itself as a tall, sleek cylindrical tower that commands attention on any conference table. Available in matte black and an arctic white variant, it combines aesthetics with functionality. The distinctive cylindrical shape, along with a thin vertical LED strip that indicates its operational status, reflects a modern design philosophy that aims to appeal to a corporate audience.

    Thoughtfully engineered, the Wave features a weighted base for stability during discussions, while the USB-C port and power button are discreetly placed at the back to maintain its elegant appearance. One of the most unique attributes of this device is its telescoping feature; the entire speaker section can be extended to reveal a circular touchscreen interface. This multifaceted design not only provides a user-friendly interface for setting adjustments but also enables a clean, minimalist look when the device is stowed away.

    At the heart of the Wave’s functionality is an advanced 8-microphone 3D array. This setup captures audio at a professional 48 kHz sampling rate, with automatic gain control that ensures every participant’s voice is heard clearly. With a pickup range extending up to 16 feet, users can move around the room without compromising audio quality. Early reviews have highlighted the speaker’s exceptional clarity in voice reproduction during calls, although some reviewers note that music playback could benefit from stronger bass response.

    The premier innovation of the Wave lies in its AI capabilities. The device can transcribe meetings in real time, effortlessly identifying different speakers and producing summaries that streamline follow-up discussions. Utilizing customizable templates, the intelligent system quickly organizes action items, key decisions, and future agendas into concise bullet points. This feature alone is a game-changer for corporate teams looking to maximize efficiency and minimize the manual labor associated with meeting notes.

    Furthermore, the Wave includes the ability to create custom glossaries tailored to specific industry jargon, enhancing its accuracy in specialized settings. This thoughtful inclusion assists in ensuring that all attendees are aligned with the terminology discussed, which can be particularly useful in technical or project-heavy industries.

    By entering the realm of AI-enhanced office technology, Insta360 not only diversifies its portfolio but also challenges competitors in the conferencing space. With major players already existing in communications solutions, the Wave’s integration of high-quality audio and advanced AI capabilities creates a unique value proposition aimed at business leaders and organization managers. The potential to automate note-taking and simplify follow-up processes presents tangible benefits, highlighting the product’s value in practical settings.

    In summary, Insta360’s Wave is more than just a speakerphone; it’s a sophisticated tool that represents a merger of technology and thoughtful design. As companies increasingly embrace remote and hybrid work environments, having an effective communication tool that streamlines meetings could significantly boost productivity. The Wave is a step forward in this direction, signaling how traditional office equipment can evolve to meet modern needs, and it could very well become a staple in conference rooms around the world.


  • Anthropic Goes Global: AI Expansion Soars!

    In a significant move poised to reshape the AI landscape, Anthropic is expanding its international operations to meet soaring global demand for its Claude AI models. This strategic initiative comes in response to the worldwide appetite for artificial intelligence, as the company aims to triple its international workforce and quintuple its applied AI team in the coming year.

    The shift toward a more global AI footprint is underscored by Anthropic’s own usage data, which show that a staggering 80% of Claude usage originates outside the United States. Notably, countries like South Korea, Australia, and Singapore are leading this charge, with per capita usage rates that outpace even those of the US. This highlights a profound and growing reliance on AI technology across borders.

    With substantial backing from tech giants Alphabet, the parent company of Google, and Amazon, and a valuation of $183 billion, Anthropic has carved a niche for itself by developing AI models that excel in various applications, particularly coding. This specialization has made its large language models (LLMs) highly sought after, fueling demand from businesses that need advanced AI solutions.

    The reach of the Claude platform is reflected in an impressive growth trajectory: Anthropic has expanded its global business customer base from fewer than 1,000 to over 300,000 customers in just two years. This rapid expansion speaks volumes about the increasing adoption of, and reliance on, AI technologies across various sectors.

    Financially, Anthropic’s growth story is equally compelling. The company has lifted its annualized revenue run rate from approximately $1 billion at the beginning of the year to roughly $5 billion by August. This remarkable financial performance is indicative of the heightened enthusiasm for AI and its transformative potential across industries.

    To sustain this momentum and keep pace with international demands, Anthropic is set to recruit over 100 new employees in key European locations such as Dublin, London, and Zurich. In addition, they are establishing their first Asian office in Tokyo, with plans for further expansions throughout Europe. These steps solidify Anthropic’s commitment to being a formidable player in the global AI arena.

    The global expansion initiative is being spearheaded by Chris Ciauri, who has recently taken on the role of International Managing Director. His appointment, alongside that of Paul Smith as Chief Commercial Officer, reflects the company’s strengthened leadership team. Both leaders are tasked with guiding Anthropic’s endeavors in international markets.

    Ciauri remarked on the exceptional global demand for Claude, highlighting that industries ranging from financial services in London to manufacturing in Tokyo are increasingly using it to enhance their core operations. This demand reinforces the notion that organizations worldwide are banking on AI to streamline and optimize critical business functions.

    In alignment with its growth strategy, Anthropic has also secured a significant partnership with Microsoft. The collaboration will integrate Anthropic’s Claude models into Microsoft’s Copilot assistant, a pivotal shift for Microsoft’s generative AI chatbot, which has historically relied heavily on OpenAI’s technology. The integration promises to enhance Copilot’s capabilities and provide a more diverse array of AI options.

    Anthropic’s continued investment in its AI models ensures ongoing improvement and development, making it a key player in the evolution of artificial intelligence globally. As the company advances its ambitious plans for growth and expansion, its impact on the AI landscape is sure to be significant.


  • From AI Optimism to a Retail Investor Push: Three Factors Fuelling China’s Stock Market Rally

    China’s stock market is undergoing a remarkable rally this year, demonstrating resilience amid various economic concerns. This upward momentum is primarily driven by three interlinked factors: a wave of optimism surrounding artificial intelligence (AI), robust engagement from domestic retail investors, and a series of strategic government policies aimed at promoting technological self-sufficiency.

    Central to this market resurgence is the rising optimism about AI technologies. Investors are increasingly convinced that advancements in AI will not only bolster productivity but also revolutionize various sectors, including retail, finance, and manufacturing. This optimism has been infectious, encouraging more capital inflow not just from domestic participants but also from foreign investors who recognize the potential of companies poised to benefit from AI innovations.

    The increase in retail investor activity is another crucial element propelling the market. Chinese retail investors, known for their significant participation in the stock market, have shown a renewed enthusiasm for equity investments this year. This engagement has provided a solid foundation for the market rally, effectively offsetting some of the hesitancy stemming from concerns about economic health. The growing dominance of retail investors has also signaled a shift in market dynamics, where individual decision-making plays a pivotal role in shaping market trends.

    Parallel to these developments, the Chinese government has been proactive in enacting policies that bolster the technology sector, particularly in AI. Initiatives aimed at ensuring self-sufficiency in technology have not only inspired confidence among investors but have also solidified the groundwork for long-term growth in the sector. By prioritizing technological advancement and providing necessary support to key players, the government is effectively laying the foundation for sustained market enthusiasm.

    This confluence of AI optimism, retail investor support, and government policy has translated into significant gains for Chinese equity markets. Benchmarks like the CSI 300 and the Hang Seng Tech Index have seen sharp increases. The Shanghai Composite Index has surged approximately 14% year-to-date, reaching a decade high earlier this month, while the Hang Seng Index has returned more than 33% in 2025 alone, underscoring the strength of the rally.

    These statistics underline China’s emerging position as a competitive player in the regional and global markets. The resilience displayed by indices like the Shanghai Composite and Hang Seng Index not only indicates a recovery from previous downturns but also highlights a shift in investor sentiment that could have long-term implications for the market. As China continues to harness AI capabilities, it stands poised to attract further investments, positioning itself as an attractive hub for tech-centric growth.

    In conclusion, the dynamics currently fueling China’s stock market suggest a complex interplay of optimism, strategic investment behavior from retail investors, and responsive government policy. Moving forward, these elements will be critical to watch, as they may define the course of the market in the upcoming months. As AI technologies continue to evolve, their impact on the economy and, in turn, the stock markets will be significant, providing investment opportunities that could be capitalized upon by savvy investors.


  • MySQL AI Introduced for Enterprise Edition

    Oracle has recently unveiled MySQL AI, a powerful suite of AI-driven capabilities designed specifically for the MySQL Enterprise edition. This introduction is particularly pertinent for organizations focusing on analytics and AI workloads in expansive, large-scale deployments. However, the announcement comes with an air of uncertainty within the MySQL community, as concerns about the future of the beloved Community edition intensify. The worry stems from possible vendor lock-in and the implications of recent internal layoffs at Oracle.

    The innovative features of MySQL AI include advanced vector storage and search capabilities, enabling enterprises to seamlessly create retrieval-augmented generation (RAG) applications directly on MySQL. This functionality eliminates the need for separate vector databases, simplifying the integration process significantly. Moreover, MySQL AI is crafted to work harmoniously with leading large language models, accelerating AI-driven queries and utilizing in-database analytics to enhance workload optimization.

    Nipun Agarwal, Senior Vice President of MySQL Engineering at Oracle, elaborates on the diverse applications enabled by MySQL AI. Among these are agentic workflows tailored for on-premise use, ranging from financial fraud detection through intricate bank transaction oversight to inventory management and demand forecasting. The flexibility of MySQL AI allows developers to build AI applications that access data directly from the MySQL database or file system, all without necessitating data movement or complex integrations. Additionally, the option to migrate applications to MySQL HeatWave in the cloud enhances operational versatility.

    The capabilities of the new AI engine are built on four cornerstone components: Generative AI, which lets users extract accurate and contextually relevant information from documents residing in local file systems; Vector Engine, which allows developers to create vectors from documents and manage them within a vector store in InnoDB; AutoML, which automates common training tasks such as algorithm selection, data sampling, and hyperparameter optimization; and NL2SQL, which uses LLMs to let developers query the database in natural language.
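    The retrieval idea behind the Vector Engine and RAG support can be illustrated with a small, self-contained sketch. This shows the concept only, using a toy bag-of-words embedding and an in-memory list as the “vector store”; it is not MySQL AI’s actual API or SQL syntax:

```python
# Conceptual sketch of vector-store retrieval, the step that underpins RAG:
# embed documents, embed the query, and return the closest documents.
# The embedding and store here are toys standing in for real components.
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "quarterly fraud report for bank transactions",
    "warehouse inventory and demand forecast",
]
store = [(doc, embed(doc)) for doc in documents]  # the "vector store"

def retrieve(query, k=1):
    # Rank stored documents by similarity to the query and keep the top k.
    qv = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [doc for doc, _ in ranked][:k]

context = retrieve("detect fraud in transactions")
print(context)  # ['quarterly fraud report for bank transactions']
# In a RAG application, this retrieved context is then inserted into the
# LLM prompt so the model can answer from the user's own documents.
```

    In MySQL AI this pipeline runs inside the database itself, which is what removes the need for a separate vector database.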

    To further enhance developer productivity, MySQL Enterprise offers native support for JavaScript stored programs, allowing developers to call GenAI APIs from JavaScript code that works directly with MySQL data. A significant addition to the MySQL ecosystem is MySQL Studio, a unified interface for MySQL AI. Agarwal notes that MySQL Studio presents an intuitive, integrated environment comprising an SQL worksheet, a chat feature for querying documents in the vector store, and an interactive notebook for building machine learning and generative AI applications.

    The launch of interactive notebooks is particularly noteworthy as they are compatible with Jupyter. This feature allows developers to import, share, and collaborate on existing notebooks, fostering a more connected and innovative development culture. However, this progressive move also emerges against the backdrop of Oracle’s strategic focus on strengthening MySQL HeatWave, their managed MySQL Enterprise database service on OCI, raising questions about the open-source trajectory of MySQL in the future.

    Concerns among industry leaders regarding MySQL’s direction have surfaced, exemplified by comments from Patrik Backman of OpenOcean, a co-founder of MariaDB. Backman points to MySQL’s original value proposition of openness and independence from lock-in, noting that the features enterprises most want, such as analytics, machine learning, and vector capabilities, now appear increasingly embedded within the HeatWave framework, which could restrict users’ choices and lock them into deeper dependence on Oracle.

    In summary, the introduction of MySQL AI represents a significant leap forward in the integration of AI capabilities within enterprise-level databases. While it presents notable advantages and opportunities for innovation, it also raises essential discussions about the balance between commercial interests and the open-source foundations that once defined MySQL. As the landscape evolves, business leaders, developers, and investors must navigate these complexities to harness the full potential of these groundbreaking advancements.


  • Is the outrage over AI energy use overblown? Here’s how it compares to your Netflix binges and PS5 sessions

    Illustration

    The debate surrounding artificial intelligence (AI) and its energy consumption has become increasingly prevalent in conversations about sustainability and technology. Headlines have claimed that AI’s energy demands rival those of entire countries, raising concerns about its environmental impact. However, a closer examination reveals an intriguing comparison between the energy used by AI and that of more familiar activities, such as streaming Netflix or playing on a PlayStation 5.

    Recent reports, including one from Google, provide more concrete data on the power consumption of AI systems. Specifically, Google has published a median energy figure for its Gemini text prompts: just 0.24 watt-hours (Wh) per prompt. While this statistic is enlightening, it comes with limitations; for instance, it covers only text-based outputs and does not account for the energy used in image or video generation.

    The critical question is how the energy consumption of a single AI prompt measures up against everyday activities. At 0.24 Wh, one prompt amounts to only about 1.5% of the energy required to fully charge a new iPhone 17, or the equivalent of just under 10 seconds of video playback on a 55-inch television.

    In reality, the majority of electricity used during a streaming session is attributed to the end device itself. For example, when enjoying video content at home, approximately 99.97% of the electricity consumed is used by the television, with data center contributions making up a mere 0.03%. This trend continues for laptops and smartphones, where data center energy use represents about 0.4% and 1.6% of the total energy consumption, respectively.

    Considering AI’s energy usage specifically from the data center perspective offers additional insight. While 0.24 Wh for an AI prompt may sound significant, it pales next to more intensive workloads such as cloud gaming: the energy of one AI prompt corresponds to approximately 3.3 seconds of cloud gaming playtime.

    So, how does this translate to daily usage? A typical user is estimated to engage with AI around 10 to 20 times per day, which works out to roughly 3.6 Wh per user per day, only about 0.03% of a user's overall daily electricity use and less than the energy an indicator light on an electronic device wastes over the same period.
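
    The comparisons above are easy to reproduce. The sketch below reruns the arithmetic; the device figures (a ~16 Wh iPhone charge, an ~85 W television, ~260 W for cloud gaming, ~12 kWh of daily electricity per user) are assumptions back-solved from the quoted percentages, not sourced numbers.

    ```python
    # Back-of-the-envelope check of the article's figures. All device numbers
    # below are assumptions inferred from the quoted percentages, not sources.
    PROMPT_WH = 0.24          # Google's published median for a Gemini text prompt

    IPHONE_CHARGE_WH = 16.0   # assumed full-charge energy for an iPhone 17
    TV_POWER_W = 85.0         # assumed draw of a 55-inch television
    CLOUD_GAMING_W = 260.0    # assumed total draw attributed to cloud gaming
    DAILY_USE_WH = 12_000.0   # assumed total daily electricity use per person

    phone_share = PROMPT_WH / IPHONE_CHARGE_WH * 100      # ~1.5% of a charge
    tv_seconds = PROMPT_WH / TV_POWER_W * 3600            # ~10 s of playback
    gaming_seconds = PROMPT_WH / CLOUD_GAMING_W * 3600    # ~3.3 s of play
    daily_wh = 15 * PROMPT_WH                             # ~15 prompts/day
    daily_share = daily_wh / DAILY_USE_WH * 100           # ~0.03% of daily use

    print(f"{phone_share:.1f}% of a phone charge, {tv_seconds:.0f} s of TV, "
          f"{gaming_seconds:.1f} s of cloud gaming, {daily_share:.2f}% of daily use")
    ```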

    The evidence suggests that while AI technology is under scrutiny for its energy demands, it is essential to contextualize its usage against traditional activities that consume far greater amounts of electricity. While the conversation around AI and energy consumption is valid, it often fails to weigh the actual impact accurately. Thus, consumers can rest assured that their nightly Netflix binges likely have a much larger ecological footprint than their interactions with AI.

    This assessment not only provides transparency about the energy demands of AI but encourages a broader conversation about our daily power consumption patterns. By examining our habits and how they compare to technologies like AI, we can make informed choices that favor sustainability. In the end, the discussion surrounding AI’s energy use is not merely about the tech itself but about how we interact with various technologies in our lives.


  • Scout AI Partners with Hendrick Motorsports Technical Solutions on NOMAD – Defense UGV Automated by Fury

    Illustration

    In an exciting development for the defense and technology sectors, Scout AI Inc. has partnered with Hendrick Motorsports Technical Solutions (HMS) to unveil NOMAD, a next-generation unmanned ground vehicle (UGV) powered by Scout’s Fury autonomy system. Announced in September 2025, the collaboration marks a significant step forward in the design and functionality of robotic systems intended for complex tactical operations.

    NOMAD showcases the latest advancements in Scout’s Fury system, now equipped with its fastest foundational model tailored specifically for compact robotic platforms. These enhancements promote agility and speed, enabling NOMAD’s deployment in various challenging mission environments. Combining cutting-edge technology with practical applications, NOMAD is designed to operate autonomously even beyond line-of-sight, thereby enhancing its operational effectiveness.

    One of the standout features of NOMAD is its second-generation Fury hardware stack, touted for being more than 90% smaller and significantly more energy-efficient than its predecessors. This compactness does not compromise performance, as NOMAD maintains low-signature capabilities and passive-sensing technologies, which are crucial in tactical scenarios requiring stealth and discretion.

    The partnership highlights the shared vision of both companies to expand the horizons of autonomous systems beyond traditional applications. Colby Adcock, Co-Founder and CEO of Scout AI, emphasized the versatility of the Fury system, stating, “We’re just beginning to unlock its potential across ground, air, sea, and space domains.” This adaptability demonstrates a forward-thinking approach to military operations, potentially transforming how missions are executed in diverse terrains.

    Building upon a foundation of camera-only autonomy, NOMAD integrates Vision-Language-Action (VLA) reasoning. This sophisticated capability is particularly noteworthy as it eliminates the need for expensive and often fragile sensor equipment. Instead, Fury exclusively employs learned models, allowing NOMAD to mimic human judgment in real-time scenarios, which is paramount in rapidly changing and unpredictable environments.

    The implications of NOMAD extend beyond mere technical advancements; they address real-world military needs. Rhegan Flanagan, Director of Government Programs at HMS, highlighted this partnership’s commitment to enhancing the capabilities of servicemembers. Flanagan stated, “Partnering with Scout AI allows us to combine world-class vehicle engineering with cutting-edge autonomy to deliver NOMAD—a commercial platform designed to give our servicemembers greater capability, protection, and confidence on the battlefield.” This focus on improving the safety and operational effectiveness of personnel demonstrates a significant commitment to innovation within military logistics.

    As technology continues to evolve, the intersection of advanced artificial intelligence and military applications grows increasingly significant. Scout AI’s collaboration with Hendrick Motorsports aims not only at perfecting UGV performance but also at ensuring mission safety and success. Once NOMAD becomes operational, its ability to follow a human operator from a safe distance while carrying various payloads for light tactical missions may revolutionize logistical support in defense operations.

    In conclusion, the launch of NOMAD represents a promising development for the future of unmanned systems in military contexts. By harnessing the advancements in AI and autonomy through the Fury system, Scout AI and Hendrick Motorsports are set to redefine the capabilities of unmanned ground vehicles. This forward-looking initiative embodies the potential of technology to enhance the effectiveness and safety of military operations, proving essential for future missions in evolving landscapes.