- 
Law firms bill clients by the hour. AI is beginning to reshape that model

The legal industry has long billed clients by the hour, a practice deeply ingrained in its business structure. Artificial intelligence is beginning to reshape that model: law firms report that AI tools have cut the time required for research and documentation by roughly 20-30%, and in high-stakes cases the savings can be even greater. Those gains are changing how clients are charged, and clients increasingly expect transparency about when AI-powered tools are used.

As Varun Khandelwal, founder of the AI platform Jurisphere, explains, “Consider an arbitrator or a lawyer with 10,000 pages in a case, needing a chronology of events. Previously, this might have consumed a month. With Jurisphere, it takes under ten minutes.” Generative AI is estimated to influence 40-60% of daily legal workflows, a share expected to grow as the technology evolves. Jurisphere’s client base includes notable firms such as MZM Legal, Veritas Legal, Wadia Ghandy, and IndusLaw, reflecting an industry-wide move toward AI adoption. These tools use generative AI for legal research, document review, compliance checks, and drafting standard contracts, automating labor-intensive processes.

Complex legal work still demands substantial human expertise; nuanced legal analysis and negotiations remain the province of seasoned professionals. As a result, larger law firms are preparing for a shift toward “hybrid billing,” in which AI-assisted work is charged at fixed or flat fees while more intricate matters continue to be billed hourly.

Suchorita Mookerjee, chief technology officer at MZM Legal, says that while AI-powered legal research tools have not yet drastically altered conventional billing, the industry is clearly trending in that direction. Her firm has seen a 25% reduction in time spent on research tasks, though the efficiency has required heightened quality checks to maintain service integrity.

Clients are increasingly part of these discussions. A senior partner at one of India’s top three law firms noted that clients now ask how much of the legal work is facilitated by AI. “We have to disclose the quality and the quantity of work done by our in-house AI tool. The billing is getting decided only after that,” the partner added. Smaller law firms, particularly those operating on a fixed-fee basis, have been among the first adopters of AI technologies.
Passing the savings on to clients remains a challenge, however: many small and mid-sized firms are finding it hard to stay profitable under fixed-fee arrangements as operational costs rise. Even so, the legal AI landscape is evolving steadily, and firms that embrace AI stand to gain a competitive advantage in delivering efficient, cost-effective legal services.

As AI permeates the legal field, both law firms and clients will need to adapt. Integrating AI into practice can improve efficiency, reduce costs, and raise the overall quality of legal services, and by rethinking traditional billing models in light of these tools, firms can better align their pricing with the evolving demands of their clientele.
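To make the “hybrid billing” idea concrete, here is a minimal Python sketch of how a blended invoice might be computed. The task names, flat fees, and hourly rate are invented for illustration; no firm’s actual pricing is implied.

```python
# Toy model of hybrid billing: AI-assisted deliverables carry flat fees,
# while judgment-heavy work (negotiation, strategy) stays on the clock.
# All figures below are hypothetical.

FLAT_FEE_TASKS = {"research_memo": 500.0, "standard_contract": 750.0}
HOURLY_RATE = 400.0  # hypothetical rate for bespoke work

def hybrid_invoice(flat_tasks: list[str], bespoke_hours: float) -> float:
    """Total a bill mixing flat-fee AI-assisted tasks with hourly work."""
    flat_total = sum(FLAT_FEE_TASKS[task] for task in flat_tasks)
    return flat_total + bespoke_hours * HOURLY_RATE

# One AI-assisted research memo plus six hours of negotiation:
print(hybrid_invoice(["research_memo"], 6.0))  # 500 + 2400 = 2900.0
```

The design point is that AI-assisted deliverables become priced outputs, while the hourly meter keeps running only where human judgment dominates.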
 
- 
DARPA unveils winners of AI challenge to boost critical infrastructure cybersecurity

The Defense Advanced Research Projects Agency (DARPA) has announced the results of its AI Cyber Challenge, a two-year competition designed to strengthen the cybersecurity of vital infrastructure through innovative AI applications. The competition culminated at the DEF CON hacker convention in Las Vegas, where Team Atlanta was named the winner, showcasing the potential of AI-driven solutions to cybersecurity problems.

Team Atlanta, composed of experts from the Georgia Institute of Technology, Samsung Research, the Korea Advanced Institute of Science & Technology, and the Pohang University of Science and Technology, took first place, a collaboration bridging academia and industry. Second place went to Trail of Bits, a small New York City business known for cybersecurity consulting and software; third went to Theori, a group of AI researchers and security professionals from the U.S. and South Korea. The finalists’ blend of expertise underscores the value of multidisciplinary approaches to hardening critical infrastructure.

The challenge’s central objective was to develop AI models that automatically identify and patch vulnerabilities in open-source code. Open-source tools are widely used because of their accessibility, yet they are often susceptible to exploitation, and the challenge sought solutions that address these weaknesses in a scalable, efficient way. During the competition, the seven finalist teams uncovered 70 synthetic vulnerabilities created for the event and identified 18 previously unknown real-world flaws, and their systems patched flaws in an average of 45 minutes, marking significant progress in applying AI to cybersecurity.

DARPA director Stephen Winchell emphasized the urgency: many existing code bases carry ‘huge technical debt,’ which complicates efforts to keep them secure in an increasingly digital world, and traditional methods may no longer suffice given the scale and complexity of the problem.

Large language models, similar to those powering popular generative AI tools, were a key driver of innovation during the competition. Anthropic and OpenAI contributed model infrastructure, letting teams build on advanced AI capabilities, a collaboration between research institutions and tech companies that bodes well for future advances. Four AI models from the competition have already been released for public use, with three more on the horizon; they have the potential to significantly improve the security posture of critical infrastructure systems and protect vital services from cyber threats.
Open-source projects form the backbone of many software systems in use today, which makes the challenge’s outcomes particularly relevant: discovering and efficiently fixing vulnerabilities in publicly available code bases is essential to public safety and health, and the methodologies developed during the challenge could pave the way for a more secure future.

In summary, DARPA’s AI Cyber Challenge has highlighted the potential of AI to address cybersecurity vulnerabilities while fostering collaboration across sectors. The winning teams’ contributions could meaningfully advance how critical infrastructure systems are protected in an increasingly interconnected world, underscoring the importance of innovation in combating cyber threats.
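The find/patch/verify loop these systems automate can be sketched in a few lines. This is a toy stand-in under stated assumptions: real entrants drove LLMs, fuzzers, and static analysis against large codebases, while this version only shows the control flow on a deliberately unsafe Python snippet.

```python
# Toy sketch of an automated vulnerability find/patch/verify loop.
# The "detector" and "patcher" below are stand-ins: a real cyber-reasoning
# system would use fuzzing, static analysis, and LLM-generated patches.

SOURCE = "def run(cmd):\n    return eval(cmd)  # unsafe: executes arbitrary input\n"

def find_vulnerabilities(src: str) -> list[str]:
    """Flag known-dangerous constructs in the source."""
    return [pattern for pattern in ("eval(", "exec(") if pattern in src]

def propose_patch(src: str) -> str:
    """Produce a candidate fix (a real system would ask an LLM)."""
    return src.replace("eval(cmd)", "str(cmd)")

def verify(src: str) -> bool:
    """Re-check the patched source (a real system would also re-run tests)."""
    return not find_vulnerabilities(src)

if find_vulnerabilities(SOURCE):
    patched = propose_patch(SOURCE)
    print("patch accepted" if verify(patched) else "patch rejected")
```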
 
- 
Is AI the reason your flight costs more? What Delta’s new pricing tech really means

The intersection of artificial intelligence and air-travel pricing has become a hot topic, particularly after Delta Air Lines announced a new AI-powered pricing strategy. The revelation sparked controversy, including a stern letter from Congress, underscoring concerns about potential misinformation surrounding the airline’s pricing practices. For frequent flyers and casual travelers alike, the question is how AI might affect airfare: some wonder whether they will face higher prices, while others hope for more competitive options.

So what exactly is Delta doing with AI? The Atlanta-based airline disclosed that it has begun using AI software to help set ticket prices on roughly 3% of its domestic flights, a significant shift from traditional pricing, which relied on human analysts and static algorithms to assess market conditions. Working with the Israeli tech firm Fetcherr, Delta says its AI tool acts as a “super analyst,” constantly analyzing data to make informed pricing recommendations. The system is designed to streamline the complex fare-setting process, which weighs factors such as demand changes, competition on routes, and historical travel data.

Airlines have long used dynamic pricing to tailor fares to market conditions: higher prices during peak travel periods such as holidays, occasional discounts when demand drops. Delta’s AI aims to make those calculations faster and more precise. The tool is still in its early stages, but Delta projects that by the end of 2025, AI-enabled pricing could influence nearly 20% of its flight network, an expansion that could reshape fare structures and make pricing more responsive, and potentially more profitable for the airline.

What does that added pricing power mean for travelers? Drastic immediate increases are not anticipated, but experts suggest AI-driven pricing could gradually raise average fares on some routes. How AI will transform the overall landscape of air-travel pricing remains to be seen. Some fear the technology could widen the gap between low-cost carriers and established airlines like Delta, particularly if AI boosts pricing margins on select routes; others hope that AI-facilitated competition will benefit consumers through more travel options and occasional discounts.

Public response continues to evolve, and Congressional scrutiny is adding pressure for transparency in pricing practices. As Delta navigates this new territory, further developments will draw continued attention from government officials and travelers alike. In short, Delta is forging ahead with AI in airfare pricing.
As the airline integrates its AI tool more broadly into its business model, the ramifications for consumers, competitors, and the industry as a whole will become clearer. The technology’s true impact on airfare will unfold over time, making it essential for travelers to stay informed about the changes ahead.
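As a rough illustration of the inputs described above (demand, route competition, and time to departure), here is a toy fare-adjustment function. The weights, multipliers, and cap are invented; Delta and Fetcherr’s actual model is proprietary and far more sophisticated.

```python
# Toy dynamic-pricing sketch: nudge a base fare with demand and urgency
# signals, then cap it relative to the competition. All numbers invented.

def suggest_fare(base_fare: float, load_factor: float,
                 days_out: int, competitor_fare: float) -> float:
    """Return a fare that rises as the cabin fills and departure nears."""
    demand_mult = 1.0 + 0.8 * load_factor              # fuller plane, higher fare
    urgency_mult = 1.0 + 0.02 * max(0, 14 - days_out)  # last-minute premium
    fare = base_fare * demand_mult * urgency_mult
    return min(fare, competitor_fare * 1.15)           # stay near rival fares

# 85% full, departing in 3 days, cheapest rival at $260:
print(round(suggest_fare(200.0, 0.85, 3, 260.0), 2))   # capped at 299.0
```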
 
- 
Hertz using artificial intelligence at Tampa International Airport to inspect rental cars for damage

Hertz has made a significant move toward reshaping the car-rental experience by deploying artificial intelligence at Tampa International Airport. The technology is meant to make vehicle inspections more accurate and efficient, targeting potential damage on rental cars before they are handed over to customers. The AI-powered 360-degree scanners detect dents, scratches, tire wear, and even undercarriage damage. That precision streamlines the rental process and addresses a long-standing frustration: manual inspections were rife with subjectivity and inconsistency, raising concerns over erroneous charges for damage that may not have occurred during the rental period.

As Hertz rolls out the technology, it sets a precedent that could influence the broader rental-car industry. Other rental companies, such as Enterprise and Dollar, may follow suit, particularly in major travel hubs, and the trend could extend beyond vehicles: Hilton properties operated by 6PM Hospitality are experimenting with AI-powered sensors that monitor for smoke or vaping and can automatically trigger fines.

According to Hertz, AI brings precision, objectivity, and transparency to the inspection process. In a statement, the company said the enhanced inspections should give customers greater confidence that they will not be unfairly charged for pre-existing damage, a timely promise as demand for seamless, trustworthy customer experiences rises across the travel sector.

The approach carries risks, however. Experts warn that customers may still be billed for damage they did not cause. As a precaution, renters are advised to document the vehicle thoroughly before and after use: take a video of the car’s condition and request a copy of the AI-generated inspection report to guard against disputes.

The shift toward AI inspections could reshape customer expectations and operational standards in the rental-car industry. With consumers relying ever more on technology, companies that adapt stand to gain a competitive edge, and AI systems that reduce wait times, improve damage accountability, and lift customer satisfaction could set new benchmarks for the travel sector. Hertz’s initiative at Tampa International Airport is a landmark in the intersection of AI and customer service, one that enhances operational efficiency and could catalyze further technological adoption across the industry, redefining how customers interact with rental services.
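The accountability logic at the core of such a system reduces to comparing the pickup scan against the return scan. Here is a minimal sketch; the scan-output format is an assumption, since Hertz has not published its scanner’s data model.

```python
# Toy before/after comparison for rental damage accountability.
# Findings are modeled as simple labels; a real scanner emits imagery
# and structured detections, but the chargeability rule is the same.

def chargeable_damage(pickup_scan: set[str], return_scan: set[str]) -> set[str]:
    """Only findings absent at pickup should be chargeable at return."""
    return return_scan - pickup_scan

pickup = {"scratch:front-left-door", "wear:rear-tires"}
dropoff = {"scratch:front-left-door", "wear:rear-tires", "dent:rear-bumper"}
print(chargeable_damage(pickup, dropoff))  # {'dent:rear-bumper'}
```

This is also why documenting the car at pickup matters: the baseline scan is what any claim of new damage is measured against.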
 
- 
An AI System Found a New Kind of Physics that Scientists Had Never Seen Before

The intersection of artificial intelligence and science continues to yield groundbreaking discoveries. A team of scientists from Emory University has made significant progress in the study of dusty plasmas using a novel machine learning (ML) model, work that corrects longstanding theoretical misconceptions and shows how AI can advance science.

Dusty plasmas are mixtures of ionized gas and charged dust particles, a unique state of matter found both in outer space and in terrestrial environments; in wildfires, for example, charged particles of soot combine with smoke to create a dusty plasma. Until now, understanding of the dynamics governing this type of plasma had been limited, leaving many questions unanswered.

In a study published in the journal Proceedings of the National Academy of Sciences (PNAS), the Emory team used its ML model to produce what it believes is the most detailed analysis of dusty plasma dynamics to date. The model analyzed existing data and generated new insights into particle behavior, yielding precise predictions of non-reciprocal forces. Non-reciprocal forces arise when two particles exert unequal forces on each other, a phenomenon the AI model has now quantified precisely.

According to co-author Justin Burton, the team’s approach avoided the typical “black box” character of many AI applications, letting researchers understand its workings and present its findings comprehensibly, a transparency that builds trust in AI across scientific settings. Burton explains, “Our AI method is not a black box: we understand how and why it works. The framework it provides is also universal. It could potentially be applied to other many-body systems to open new routes to discovery.” The implications are vast: if the techniques developed for dusty plasma apply to other systems, the potential for discovery expands across many scientific fields.

The revised understanding of non-reciprocal forces sheds light on phenomena previously only speculated upon. As co-author Ilya Nemenman points out, the team found that a leading particle attracts the trailing particle in a dusty plasma, but the force is not reciprocated: the trailing particle repels the leading one. This asymmetric dynamic challenges previous notions and could open new avenues in plasma physics.

The model also represents an opportunity beyond data analysis: it exemplifies an emerging trend of AI systems serving as active participants in scientific inquiry rather than passive assistants, potentially a paradigm shift in how new physics is discovered. And while AI is often discussed in terms of societal concerns such as misinformation and job displacement, this case stands in contrast, showing AI’s merits in deepening scientific understanding and driving innovation.
In conclusion, the breakthroughs achieved by the Emory University researchers illustrate not only the capabilities of modern machine learning technologies but also their profound implications for diverse fields of study. As we continue to harness AI’s potential, it may unlock new dimensions within fundamental physics and beyond, allowing for improved predictions and deeper insights into the very fabric of our universe. 
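To see why non-reciprocity matters, consider a toy one-dimensional pair in which the trailing particle is pulled toward the leader more strongly than the leader is pushed away. The force constants and overdamped update below are invented for illustration and are not the forces measured in the PNAS study.

```python
# Toy 1-D illustration of non-reciprocal forces: the trailing particle is
# attracted to the leader while the leader is repelled, with different
# strengths, so Newton's third law does not hold pairwise. The pair's
# gap shrinks while the pair as a whole drifts forward.

def step(x_lead: float, x_trail: float, dt: float = 0.01,
         k_attract: float = 1.0, k_repel: float = 0.4) -> tuple[float, float]:
    """One overdamped update; the two forces differ in magnitude."""
    gap = x_lead - x_trail
    force_on_trail = k_attract * gap   # pulled toward the leader
    force_on_lead = k_repel * gap      # pushed away from the trailer
    return x_lead + dt * force_on_lead, x_trail + dt * force_on_trail

x_lead, x_trail = 1.0, 0.0
for _ in range(500):
    x_lead, x_trail = step(x_lead, x_trail)
print(round(x_lead - x_trail, 4), round(x_lead, 4))  # gap decays; pair drifts
```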
 
- 
Elastic AI SOC Engine helps SOC teams expose hidden threats  The rise of sophisticated cyber threats has made the role of Security Operations Center (SOC) teams more crucial than ever. However, with the increasing volume of alerts and the complexity of investigations, SOC analysts often find themselves overwhelmed. Enter the Elastic AI SOC Engine (EASE), an innovative solution designed to empower SOC teams and enhance their ability to expose hidden threats. EASE is a new serverless, easy-to-deploy security package that integrates seamlessly into existing Security Information and Event Management (SIEM) and Endpoint Detection and Response (EDR) tools. What sets EASE apart is its AI-driven context-aware detection and triage capabilities, which do not require SOC teams to undergo immediate migrations or complete replacements of their current systems. One of the standout features of EASE is its agentless integrations, allowing security teams to start applying AI analysis to alerts right away. Instead of waiting for extensive systems replacements, teams can leverage their existing setups with platforms such as Splunk, Microsoft Sentinel, and CrowdStrike, thereby maximizing their current investments while enhancing their operational efficacy. With EASE, security teams gain access to Elastic’s powerful Attack Discovery capabilities, which utilize AI to triage, correlate, and prioritize alerts efficiently. This not only streamlines the analysis process but also reduces alert fatigue—a common pain point for SOC analysts facing an overwhelming number of alerts each day. The AI-powered alert view comes equipped with summaries and contextual information that assist analysts in making informed decisions rapidly. Another noteworthy feature is the context-aware AI Assistant, which enriches investigations by providing data from internal knowledge sources such as Jira, GitHub, and SharePoint. This assists analysts in conducting nuanced investigations through natural language queries and relevance-aware searches across organizational data. Such capabilities make it easier for teams to uncover coordinated threats that may otherwise go unnoticed. Transparency in AI operations is a core principle of EASE. Organizations have the option to choose an LLM (Language Model) that aligns best with their needs, including the Elastic Managed LLM or their proprietary models. EASE ensures that all AI Assistant responses are cited, detailing the underlying data used in generating those responses. Furthermore, every query, response, and token usage are logged and trackable, making it easier for organizations to maintain a clear understanding of their AI interactions. Operational dashboards further facilitate the enhancement of security measures by providing out-of-the-box metrics. These metrics showcase time savings, detection improvements, and overall return on investment (ROI), thus enabling SOC teams to demonstrate the business value of their security operations succinctly. As cyber threats continue to evolve, having visibility into the ROI of security tools becomes increasingly critical for decision-makers. According to industry experts, EASE addresses a common challenge within the cybersecurity landscape: the need for open and transparent AI integration without having to overhaul existing infrastructures. 
As Michelle Abraham, a senior research director in Security and Trust at IDC, noted, “EASE helps teams with faster detection and investigation using the tools they already have.” That makes EASE both a valuable addition to existing practice and a step toward more proactive security.

In conclusion, the Elastic AI SOC Engine represents a meaningful shift in the operational efficacy of SOC teams. By integrating robust AI capabilities into existing security frameworks, it streamlines investigations, empowers analysts, reduces alert fatigue, and strengthens organizations’ overall security posture. For business leaders, product builders, and investors looking to stay ahead in cybersecurity, understanding and potentially adopting EASE could provide a competitive edge in an increasingly complex digital landscape.
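The auditability described above (cited responses plus logged queries and token counts) can be pictured with a small sketch. The record layout here is an assumption for illustration; EASE’s actual schema is not public.

```python
# Minimal sketch of an AI-assistant audit trail: every exchange is stored
# with its citations and token usage so the interaction is traceable.
# Field names and the example alert/ticket IDs are hypothetical.

import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class AssistantExchange:
    query: str
    response: str
    citations: list[str]        # sources the answer was grounded in
    prompt_tokens: int
    completion_tokens: int
    timestamp: float = field(default_factory=time.time)

audit_log: list[dict] = []

def record(exchange: AssistantExchange) -> None:
    """Append a JSON-serializable record of the exchange."""
    audit_log.append(asdict(exchange))

record(AssistantExchange(
    query="Summarize alerts correlated with host web-01",
    response="Three alerts share a suspicious parent process ...",
    citations=["alert:8841", "jira:SEC-112"],
    prompt_tokens=512,
    completion_tokens=87,
))
print(json.dumps(audit_log[0], indent=2))
```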
 
- 
Paycom raises 2025 revenue and profit forecasts on AI-driven demand

Paycom Software, a prominent player in payroll processing, has raised its revenue and profit forecasts for fiscal 2025, attributing the increase largely to demand driven by new artificial intelligence features in its employee-management services. The strategic pivot has enhanced the company’s offerings and lifted its stock, which jumped 7 percent in after-hours trading following the announcement.

Under the revised guidance, Paycom expects 2025 total revenue of $2.05 billion to $2.06 billion, up from $2.02 billion to $2.04 billion previously and above the average analyst expectation of $2.03 billion. The revision underscores Paycom’s ability to leverage technology for growth at a time when many firms are struggling to hold their market positions.

Central to the transformation is Paycom’s ‘smart AI’ suite, which streamlines time-consuming workforce-management tasks. Features such as automated job-description generation and predictive analytics that flag employees at risk of leaving have resonated with businesses seeking more efficient solutions; they save time and help employers make data-driven decisions. CEO Chad Richison emphasized the company’s commitment to extending its technology lead: “We are well positioned to extend our product lead and eclipse the industry with even greater AI and automation.”

Core profit guidance also rose, to between $872 million and $882 million from the earlier $843 million to $858 million. For the second quarter ended June 30, Paycom reported revenue of $483.6 million, beating analyst estimates of $472 million, and adjusted core profit of $198.3 million, a significant jump from $159.7 million a year earlier that points to the effectiveness of its AI enhancements.

Interestingly, the optimistic forecasts come as U.S. labor-market conditions appear to be deteriorating: a recent Labor Department report showed weaker-than-expected employment growth in July and a downward revision of nonfarm payroll counts for the preceding two months totaling 258,000 jobs. That backdrop makes Paycom’s performance all the more notable, showcasing its ability to innovate and thrive even when external conditions are challenging.

In summary, Paycom’s raised forecasts and its strategic use of AI represent a significant advance within the payroll-processing industry.
The company’s proactive approach to technology not only enhances its operational efficiencies but also positions it favorably against competitors. As businesses strive to simplify and optimize their workforce management, Paycom’s offerings become increasingly relevant, providing tangible solutions that cater to the evolving demands of the modern workplace. 
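As a quick sanity check on the figures quoted above, the guidance raise and the quarterly profit jump work out as follows (simple arithmetic on the article’s numbers):

```python
# Midpoint of the new 2025 revenue guidance vs. the old one, and the
# year-over-year growth in Q2 adjusted core profit, from the figures above.

old_low, old_high = 2.02e9, 2.04e9
new_low, new_high = 2.05e9, 2.06e9
raise_usd = (new_low + new_high) / 2 - (old_low + old_high) / 2
print(f"guidance midpoint raised by ${raise_usd / 1e6:.0f}M")  # ~$25M

q2_profit, q2_profit_prior = 198.3e6, 159.7e6
growth = (q2_profit / q2_profit_prior - 1) * 100
print(f"Q2 adjusted core profit up {growth:.1f}% year over year")  # ~24.2%
```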
 
- 
OpenAI’s low-cost, open-weight AI models are here. But are they truly ‘open’?

OpenAI has made a significant shift in its approach to artificial intelligence by releasing two new open-weight models, gpt-oss-120B and gpt-oss-20B, the company’s first such release in six years. The models can run directly on personal devices such as laptops and be fine-tuned for a variety of applications, and the release is particularly noteworthy because it followed multiple delays attributed to safety concerns.

In a blog post, OpenAI expressed excitement about providing these best-in-class open models, which it says will let everyone from individual developers to large organizations and government entities run and customize AI on their own infrastructure. The timing is interesting: it follows the earlier release of DeepSeek’s cost-effective, open-weight R1 model, which may have influenced OpenAI’s decision to diversify away from the closed, proprietary models that have dominated its offerings since the 2019 launch of GPT-2. The release also comes amid anticipation of GPT-5, which OpenAI is expected to ship shortly.

So what do we know about the new gpt-oss models? The gpt-oss-120B model, with 117 billion parameters, can run on a single 80GB GPU, while its smaller counterpart, gpt-oss-20B, can be deployed on a laptop with just 16GB of memory. Both have been released under the Apache 2.0 license, so developers can download and host them freely on platforms like Hugging Face, and Microsoft is adapting a GPU-optimized version of gpt-oss-20B for Windows devices, further broadening their reach.

Parameter count often correlates with problem-solving capability: by conventional understanding, models with more parameters generally perform better. OpenAI, however, says it made the new models more efficient using a mixture-of-experts (MoE) technique, which DeepSeek also employs. MoE improves energy efficiency and reduces computational cost by activating only a small fraction of the model’s parameters for any given task. OpenAI also used grouped multi-query attention to optimize inference and memory efficiency, diverging from the multi-head latent attention in DeepSeek’s V2 model; the attention mechanism matters for quick, efficient responses in demanding applications. The gpt-oss models support a context window of up to 128,000 tokens, a notable feature that enables context-rich interactions.

On performance, gpt-oss-120B is reported to match o4-mini, one of OpenAI’s strongest reasoning models, indicating that the open-weight models, though positioned as more accessible options, do not badly compromise on performance and are viable alternatives for businesses and individual developers alike.

The release of these open-weight models signifies a critical moment in AI history, opening the door to broader participation in the AI landscape.
By allowing developers to customize models according to their specific needs and run them on local infrastructures, OpenAI encourages innovation and reduces dependency on cloud-based solutions. This move has vast implications for businesses looking to leverage AI tools tailored precisely to their operational challenges. However, questions remain regarding the true openness of these models, stirring discussions in the AI community about the balance between accessibility and control over powerful AI systems. As OpenAI champions this new direction, stakeholders will be watching closely, hoping it catalyzes a wave of advancements while also emphasizing the importance of responsible AI development. 
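The mixture-of-experts idea mentioned above can be sketched in a few lines: a router scores the experts for each token, but only the top-k experts actually run, so most parameters sit idle on any given forward pass. The dimensions, expert count, and k below are illustrative, not gpt-oss’s actual configuration.

```python
# Minimal numpy sketch of top-k mixture-of-experts routing. Only the
# top_k selected experts are evaluated for a token, which is how MoE
# models keep per-token compute far below their total parameter count.

import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2          # illustrative sizes

router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                      # score every expert
    chosen = np.argsort(logits)[-top_k:]       # indices of the best scorers
    gates = np.exp(logits[chosen])
    gates /= gates.sum()                       # softmax over chosen experts only
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

token = rng.normal(size=d_model)
print(moe_forward(token).shape, f"experts evaluated: {top_k} of {n_experts}")
```

In a full transformer each MoE layer has its own router and experts; the effect is that per-token compute scales with the active experts rather than the full parameter count.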
 
- 
Analysis-Europe’s old power plants to get digital makeover driven by AI boom

Europe’s aging coal- and gas-fired power plants are poised for a significant transformation, driven by burgeoning demand for artificial intelligence. Major tech firms, including Microsoft and Amazon, are eyeing these sites for conversion into data centers, leveraging existing infrastructure that offers convenient access to both power and water.

Utilities such as France’s Engie, Germany’s RWE, and Italy’s Enel are at the forefront of the shift, looking to capitalize on the rapid rise in energy demand spurred by AI. By repurposing old power sites as advanced data centers, they aim both to mitigate the financial cost of shutting down outdated facilities and to pave the way for future renewable development. The appeal is a dual benefit: utilities recoup costs while enhancing their sustainability profiles.

Bobby Hollis, Microsoft’s vice president for energy, highlighted that these sites come equipped with essentials such as water infrastructure and heat-recovery systems, easing the transition to high-tech operations and tackling two bottlenecks in the AI industry: secure power-grid connections and efficient water cooling. Amazon’s EMEA energy director, Lindsay McQuade, pointed to permitting advantages: with significant infrastructure already in place, such conversions should win faster approvals, accelerating the transition to data centers.

Utilities have options: they can lease the land for data centers or take on construction and operation themselves. Either route opens avenues for lucrative long-term power contracts with tech companies, establishing a steady revenue stream. Simon Stanton, who heads Global Partnerships and Transactions at RWE, emphasizes that such agreements go beyond land leasing; they foster long-term business relationships that mitigate risk and support infrastructure investment.

As environmental regulations tighten, the necessity for change is unmistakable. The European Union has targeted the closure of the majority of its hard coal and lignite plants by 2038 to meet climate commitments, and hundreds of plants have already gone offline since 2005. Repurposing these aging facilities is a vital component of Europe’s shift toward a greener energy landscape.

Data centers demand vast quantities of energy, often several hundred megawatts to a gigawatt or more. Gregory LeBourg, environmental program director at French data-center operator OVH, notes the compelling economics behind these deals: tech firms are willing to pay a premium of up to 20 euros per megawatt-hour for low-carbon power, and such premiums can make long-term contracts worth hundreds of millions or even billions of euros over time, as experts have calculated.

This strategic pivot revitalizes aging infrastructure and generates new revenue streams for utility companies while aligning with global efforts to reduce carbon footprints and invest in sustainable technologies.
As the power sector continues to adapt to the demands of the digital era, the collaboration between tech firms and traditional utilities showcases a promising pathway toward a more sustainable energy future. 
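A back-of-the-envelope check makes the quoted contract sizes plausible, assuming continuous operation at the 20 euro per megawatt-hour premium cited above (the load sizes are the article’s illustrative range, not any specific deal):

```python
# Rough annual value of a 20 EUR/MWh low-carbon premium at data-center
# scale, assuming round-the-clock load. This is the premium alone, on
# top of the underlying energy price.

HOURS_PER_YEAR = 8760
PREMIUM_EUR_PER_MWH = 20

for load_mw in (300, 1000):   # "several hundred megawatts to a gigawatt"
    annual_premium = load_mw * HOURS_PER_YEAR * PREMIUM_EUR_PER_MWH
    print(f"{load_mw} MW -> {annual_premium / 1e6:.0f}M EUR/year in premium")
```

At a gigawatt of load, the premium alone approaches 175 million euros a year, so a contract running a decade or more lands comfortably in the hundreds-of-millions-to-billions range the article cites.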
 
- 
We need to relearn how to use AI when it’s on our bodies

The advent of wearable technology marks a new frontier for artificial intelligence, particularly with the launch of Gemini on popular smartwatches like the Samsung Galaxy Watch 8 and the Pixel Watch. The shift is significant: AI is no longer confined to phones and computers but is now embedded in the fabric of daily life, quite literally on our wrists, promising more seamless, convenience-driven interaction with technology.

The move to wearables is expected to bring a lifestyle of greater efficiency: managing everyday tasks with a simple voice command, no fumbling with devices, instant access to a helpful assistant on the go. That perceived ease, however, presents a challenge of its own. The author has spent years mastering interactions with existing voice assistants like Google Assistant, developing an in-depth understanding of their limitations and capabilities, and changing the platform of interaction raises a critical question: how does one relearn to engage with AI when it is more accessible yet fundamentally different from what we are accustomed to?

Hands-on testing of Gemini on the Galaxy Watch 8 produced a sense of disorientation. Despite advances in natural language processing, the experience posed a series of hurdles, not merely in the technology’s responsiveness but in the cognitive shift of moving from a smartphone to a wearable. Habits ingrained by years of Google Assistant use did not translate seamlessly to the new device, which could frustrate users expecting similar efficiency. Voice commands like ‘Hey Google’ must be fluid and instantaneous, and hesitation can lead to unintended actions, producing awkward moments instead of smooth interactions.

The potential utility of such commands is vast, as Samsung’s demos of practical applications show, from finding local gyms to managing workout routines. Yet the author’s attempts to engage with Gemini highlighted inconsistent responses and difficulty with more intricate contextual queries. Using Gemini to start a run based on calorie counts surfaced an interesting challenge: intuitive requests require specificity, and vague commands produced unintended, unmotivating outputs when trying to gauge calorie needs. That misalignment creates confusion, hurting the user experience and limiting the efficacy the interaction was meant to deliver.

Navigating applications with Gemini reveals another layer of complexity. The AI’s reliance on compatible applications introduces constraints that hinder its user-friendliness, and miscommunications, such as failing to link with messaging apps or returning basic lists instead of personalized recommendations, underscore the limits of early AI adoption on wearables. Despite these challenges, the potential for AI in wearable technology is immense, and the leap to smartwatches could genuinely streamline and enhance user experiences.
Still, users must go through an adjustment period, relearning the dynamics of interaction to get the most out of AI, and as developers iterate, bridging the gap between expectation and delivery will be crucial to fostering a seamless relationship with these assistants. The journey toward fully utilizing AI on wearables reflects the broader challenge of adapting to technological evolution: leveraging AI for health monitoring, fitness, and everyday convenience lies at the intersection of understanding, adaptability, and innovation. As we embrace Gemini and similar technologies on our wrists, the opportunity to reshape our daily interactions with AI opens a realm of possibilities, albeit with some growing pains along the way.
 
