-
AMD’s Instinct MI450 Reportedly Secures A Major AI Customer
In a significant development for the artificial intelligence hardware landscape, AMD has reportedly secured a major customer deal for its forthcoming Instinct MI450 GPU accelerators. This move positions AMD as a key player in the AI revolution, amidst supply chain constraints that have driven many leading technology companies to seek robust and reliable GPU solutions.
Recent industry chatter indicates that Anthropic, an influential AI firm known for its advanced models, is poised to incorporate AMD’s next-generation Instinct MI450 GPUs into its server infrastructure. This partnership could enhance Anthropic’s computational capabilities while addressing the growing demand for AI processing power, further amplifying AMD’s presence in the increasingly competitive AI market.
The need for high-performance computing resources is critical as AI applications demand increasingly sophisticated data processing. AMD’s decision to target OpenAI-scale customers is strategically aligned with industry needs, building on existing collaborations with giants like OpenAI, which has committed to 6 gigawatts of capacity across multiple generations of Instinct accelerators, and Meta.
AMD’s new MI450 series, unveiled this year, is built upon the innovative CDNA 5 architecture, showcasing impressive specifications that make the GPUs highly attractive for enterprises investing in AI technology. These specifications include substantial HBM4 memory capacity and bandwidth improvements, which promise to elevate AI workloads to unprecedented levels. With a performance metric of 40 PFLOPS (FP4) and 20 PFLOPS (FP8), the Instinct MI450 GPUs are set to double the computational output of the current MI350 series—currently a top-selling product in AI data centers.
Among the enhancements, the new HBM4 memory offers a remarkable capacity increase from 288GB to 432GB, alongside an astounding bandwidth of 19.6 TB/s, which is over double that of the predecessor’s 8 TB/s performance. These advancements signify critical strides in memory architecture that are essential for managing the vast data processing requirements of modern AI applications.
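The generational deltas quoted above are easy to sanity-check. The sketch below uses only figures stated in this article; the MI350 compute numbers are inferred from the stated claim that the MI450 doubles them, so treat them as an illustration rather than official specifications:

```python
# Generational comparison of AMD Instinct MI450 vs. MI350, using only figures
# quoted in the article above. MI350 compute values are back-derived from the
# stated "double the computational output" claim, so they are an assumption.
mi450 = {"fp4_pflops": 40, "fp8_pflops": 20, "hbm_gb": 432, "bw_tbps": 19.6}
mi350 = {"fp4_pflops": 20, "fp8_pflops": 10, "hbm_gb": 288, "bw_tbps": 8.0}

def gen_gain(new, old):
    """Return the per-metric ratio of new-generation to old-generation specs."""
    return {k: round(new[k] / old[k], 2) for k in new}

gains = gen_gain(mi450, mi350)
print(gains)  # capacity up 1.5x, bandwidth up 2.45x, compute up 2x
```

The ratios confirm the article's framing: memory capacity rises 1.5x, bandwidth roughly 2.45x, and compute 2x generation over generation.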
AMD is actively positioning its Instinct MI400 series against competitor NVIDIA’s Vera Rubin GPUs, emphasizing notable performance comparisons. Highlights mention AMD’s memory capacity, memory bandwidth, and computational performance that rival these competitors, showcasing AMD’s commitment to matching or exceeding industry standards.
As the GPU market faces tight supply conditions, securing significant deals like the one with Anthropic could provide AMD with a decisive advantage over its competitors. With major players keen to acquire all opportune and viable compute resources, AMD appears well-aligned to meet the surging demands from various sectors relying on artificial intelligence technologies.
Moreover, the timing of this leak regarding AMD’s partnership with Anthropic is notable. The news coincides with Anthropic’s strategic alliance with technology giants Google and Broadcom, which extends its access to advanced AI solutions. This multifaceted approach to AI development gives Anthropic a competitive advantage, making its anticipated integration of AMD’s technology all the more significant in the broader landscape of AI innovation.
The implications of AMD’s potential deal with Anthropic underscore the pressing necessity for AI firms to ensure robust computing infrastructures that can handle next-generation AI workloads. As AI technology continues to advance, the race among hardware manufacturers like AMD and NVIDIA will likely intensify, driven by the demand for greater processing power and efficiency.
In summary, AMD’s ongoing developments with the Instinct MI450 GPUs not only represent a leap in hardware capability but also symbolize a broader response to market needs. The commitment of leading AI firms to modernize their computing capabilities through such partnerships with AMD will undoubtedly impact the AI sector in the coming years, bringing forth innovations and applications that stretch the boundaries of what is currently possible.
-
AI News
The landscape of robotics is undergoing a significant transformation, and Physical Intelligence, a San Francisco-based startup, is at the forefront of this revolution. Recently, the company published groundbreaking research revealing its newest model, π0.7, which has demonstrated the remarkable ability to direct robots to undertake tasks they were never explicitly trained on. This radical advancement caught even the researchers off guard and suggests that AI in robotics may soon experience a pivotal transformation akin to the progress seen in large language models.
At the heart of Physical Intelligence’s latest findings is a concept termed compositional generalization. This refers to the model’s capability to integrate skills acquired in varying contexts to address new problems effectively. Historically, training robots has relied heavily on rote memorization—collecting task-specific data and developing specialized models for each job. With π0.7, the company asserts that this traditional model has been fundamentally altered.
“Once it crosses that threshold where it goes from only doing exactly the stuff that you collect the data for to actually remixing things in new ways,” explains Sergey Levine, co-founder of Physical Intelligence and a professor at UC Berkeley specializing in AI for robotics, “the capabilities are going up more than linearly with the amount of data.”
The implications of this advancement are profound. The research highlights a striking demonstration where the model successfully operated an air fryer—a task it had almost no training data for. In fact, only two instances relevant to the air fryer appeared in the training dataset: one involved a robot merely closing the appliance, while the other recorded a different robot placing a plastic bottle inside it after receiving instructions. Remarkably, π0.7 synthesized those fragments and combined them with broader pretraining data available on the web, resulting in a coherent understanding of how to use the air fryer.
The ability of π0.7 to perform tasks with minimal input showcases a leap in how robots can operate—especially in unfamiliar domains. During tests, the model required zero instructional coaching and could still produce a credible attempt at cooking a sweet potato using the air fryer. When provided with straightforward, step-by-step verbal guidance, similar to what one might give a novice employee, the robot was able to complete the cooking task successfully.
This capability bodes well for the future of robotics, as it opens up the possibility for deployment in new environments where robots could be quickly adapted and enabled to learn in real time without the traditional need for extensive data collection. This adaptability signifies a major paradigm shift in making robotics more efficient and versatile across various sectors.
As robotics continues its journey toward automation, the significance of breakthroughs like that of Physical Intelligence cannot be overstated. If the findings hold up under scrutiny, it could herald advancements that reshape how we think about robotic capabilities in the workplace and beyond. As businesses seek to streamline operations and ensure greater efficiency, the application of such technologies could lead to reduced labor costs and increased productivity.
With robotic systems potentially able to learn and adapt on the fly, organizations will not only benefit from cost savings but also enjoy the flexibility of deploying machines in roles that were previously deemed too complex or variable for automation. As these developments unfold, industry leaders, product builders, and investors must pay close attention, as the implications for business models and operational strategies are profound.
In conclusion, the emergence of models like π0.7 from Physical Intelligence suggests we are on the verge of a new era in robotics, where machines can creatively engage with their environments and learn to perform tasks beyond their initial programming. This represents a leap toward the long-cherished dream of creating a general-purpose robot brain, capable of understanding and executing instructions in a manner akin to human learning.
-
‘Makes it even more disappointing’: Microsoft backs fossil fuel big time with $7 billion deal in race for AI supremacy
In an unexpected pivot, Microsoft is making headlines with its decision to engage heavily in fossil fuel investments, signifying a critical moment in the intersection of technology, energy, and environmental policy. The tech titan has signed a series of methane gas-powered data center deals amounting to nearly 5 gigawatts of capacity. This move raises essential questions about sustainability and commitment to climate goals, particularly in the context of its ambitious plans for AI expansion.
As the demand for AI capabilities escalates, hyperscale data center operators are competing fiercely to secure reliable and abundant power sources. Microsoft, however, appears to be taking a substantial step back from its previous commitments in clean energy. This strategy is exemplified by a significant investment in a 2.5-gigawatt plant in Pecos, Texas, developed in collaboration with oil behemoth Chevron, along with additional projects in Abilene, Texas, and Mason County, West Virginia.
The implications of this decision are profound. Research indicates that these new initiatives could result in a staggering 160% increase in Microsoft’s data center carbon output, potentially yielding approximately 25.25 million metric tons of CO₂ equivalent emissions by 2028. The dissonance between Microsoft’s climate pledges and its current operational choices is becoming increasingly evident. After a commitment made in 2020 to achieve carbon negativity by 2030, the company’s emissions have reportedly increased by at least 30%, challenging the credibility of its environmentally friendly claims.
Microsoft’s President, Brad Smith, recently expressed optimism about meeting the company’s climate targets; however, such assurances stand in stark contrast to its accelerating reliance on fossil fuels. Analysts note that this reliance compounds existing energy issues, as demand for AI processing power continues to surge faster than renewable energy capacity can grow.
The shift toward methane gas has also accelerated sharply: on-site data centers accounted for just 5% of U.S. methane power demand in 2024, a figure that jumped to 39% the following year, driven by the urgency of training and operating large language models. This transition raises concerns about the financial burden placed on consumers and the public health implications of air pollution from fossil fuel combustion. Studies indicate that powering a single data center with on-site methane gas could incur health-related costs ranging from $53 million to $99 million.
Moreover, a 2021 study from Harvard University highlights the global health crisis associated with fossil fuel reliance, noting that one in five deaths can be traced back to air pollution caused by burning these energy sources. This underscores the larger ethical dilemma facing corporations like Microsoft that are venturing into such environmentally damaging practices.
Microsoft has long touted its commitment to sustainability, claiming to offset its energy use with renewable sources. However, industry analysts argue that these claims primarily reflect energy market transactions rather than a substantial commitment to dependable, green energy systems specifically fed into its data centers. The reality appears to be a growing divergence between Microsoft’s public narrative and its operational reality, stirring debate over trust and accountability in corporate environmental practices.
As society grapples with the pressing challenges of climate change and the paramount need for sustainable development, Microsoft’s current trajectory raises critical discussions among business leaders, stakeholders, and concerned citizens alike. The choice to revert to fossil fuels for powering AI initiatives not only contradicts the company’s earlier environmental objectives but also reflects broader market pressures and urgent demands for reliable energy. The future of AI technology may hinge significantly on how corporations align their operational practices with ethical and sustainable principles in an ever-evolving landscape.
-
This startup wants to replace marketing agencies with AI. Read the pitch deck it used to raise $4.5 million.
In a rapidly evolving digital marketing landscape, one startup is aiming to revolutionize the way brands connect with consumers. Uplane, a San Francisco-based startup, has introduced an innovative platform designed to replace traditional marketing agencies with advanced artificial intelligence technology. Recently, the company secured $4.5 million in seed funding, signaling strong investor confidence in its mission.
Founded in November 2022 by Julius Körfgen, Lukas Vollmer, and Marvin Abdel-Massih, Uplane targets brands that spend over $100,000 per month on digital advertising. The platform enables these brands to create, launch, and test ad campaigns across major digital channels, including Meta, Google, TikTok, and LinkedIn. What sets Uplane apart is its ability to steer ad spending toward the most effective campaigns, optimizing marketing budgets and maximizing return on investment (ROI).
Uplane not only facilitates the creation of ad campaigns but also allows clients to test website landing pages. With the option to upload their own ads or generate visuals using Uplane’s AI capabilities, businesses can quickly adapt to consumer preferences. The platform supports both text and video ads, enabling comprehensive engagement across different media formats.
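The article does not describe how Uplane's spend-steering works internally. One common pattern for this kind of optimization is performance-weighted budget reallocation with a small exploration floor, sketched below as a hypothetical illustration; the campaign names, ROAS figures, and the algorithm itself are invented for this sketch and are not Uplane's actual system:

```python
# Hypothetical sketch of performance-weighted budget reallocation, the kind of
# mechanism a platform might use to steer spend toward effective campaigns.
# NOT Uplane's actual algorithm; campaign names and ROAS figures are invented.

def reallocate(budget, campaigns, floor=0.05):
    """Split `budget` across campaigns in proportion to observed ROAS,
    while guaranteeing each campaign a small exploration floor."""
    total_roas = sum(c["roas"] for c in campaigns)
    explore = budget * floor * len(campaigns)  # reserved for exploration
    exploit = budget - explore                 # allocated by performance
    return {
        c["name"]: round(budget * floor + exploit * c["roas"] / total_roas, 2)
        for c in campaigns
    }

campaigns = [
    {"name": "meta_video", "roas": 3.2},
    {"name": "google_search", "roas": 4.8},
    {"name": "tiktok_ugc", "roas": 2.0},
]
print(reallocate(10_000, campaigns))
```

The exploration floor is the key design choice: without it, a briefly underperforming campaign would receive zero budget and could never recover, so the allocator keeps a minimum spend on every channel to keep measuring it.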
The recent funding round was led by Play Ventures and included notable participation from Y Combinator and other investor groups. This backing should catalyze Uplane’s expansion efforts, helping the company grow its 15-person team and enhance its capabilities.
According to Körfgen, Uplane’s clients have seen markedly improved return on ad spend: an average gain of 30% within six weeks, with some clients, such as Aonic, achieving a 60% improvement. This track record underscores Uplane’s potential to disrupt the marketing agency model by delivering faster, data-driven outcomes.
Uplane’s innovative approach comes in response to a significant gap in the advertising landscape. While traditional marketing agencies offer a broad range of services, Uplane focuses on measurable, data-driven results. Körfgen emphasizes that AI can optimize tasks such as ad creation, landing page design, and performance tracking more efficiently than human counterparts. “Each of those steps, AI can do faster and better than humans,” he remarked.
Interestingly, the concept of Uplane emerged from Körfgen’s previous experiences at Enpal, a performance marketing company. Recognizing the potential for AI to enhance traditional marketing processes, he joined forces with Abdel-Massih and Vollmer to create a solution that prioritizes efficiency and effectiveness.
Uplane is not without its limitations. As it stands, the platform does not currently offer audio as part of its advertising strategy, focusing instead on measurable outcomes rather than strategic brand storytelling. This choice raises questions about the potential drawbacks of relying solely on AI for marketing efforts, particularly regarding the depth of brand management that experienced agencies typically offer.
Despite the competitive landscape, which includes well-funded rivals such as Smartly and Pomo, Uplane’s unique proposition lies in its fusion of artificial intelligence with real-world marketing performance. As the startup prepares to grow its customer base and integrate with additional ad platforms, investors are keenly observing its progress.
For businesses looking to harness the power of AI in their marketing efforts, Uplane’s recent achievements serve as a powerful example of what is possible in the future of advertising. With its clear focus on ROI and efficiency, Uplane is poised to shake up the conventional marketing agency model, providing brands with the tools they need to stay ahead in a dynamic digital environment.
-
Wells Fargo Scales AI to Meet Surging Customer Demand
In a rapidly evolving digital landscape, Wells Fargo is making strides to keep up with the surging demand from its customers. During the first quarter earnings call on April 14, executives highlighted the bank’s commitment to enhancing its digital offerings in response to changing customer expectations.
Wells Fargo’s Chairman and CEO, Charlie Scharf, emphasized the necessity of modernizing their digital platform, particularly in providing seamless mobile experiences alongside traditional in-person services. This approach is a direct response to the increasing number of mobile active users, which has surpassed an impressive 33 million. The bank reported a 14% increase in Zelle transactions year over year, and its AI-powered virtual assistant, Fargo, has now engaged in over 1 billion interactions since its launch less than three years ago.
The bank garnered attention recently with its innovative digital initiatives, including a trademark application filed on March 10 for a new digital asset-centric platform named “WFUSD.” This platform seeks to offer services around asset tokenization, cryptocurrency payment processing, and the execution of digital asset trades. Although executives refrained from discussing digital assets during the recent earnings call, they acknowledged the challenges posed by technological advances such as AI and digital assets on the competitive banking landscape.
Wells Fargo identifies responsibly scaling AI as a paramount objective, and the ongoing adoption of the Wells Fargo Mobile app reflects this trend: mobile active users grew from 31.8 million in Q1 2025 to 33.5 million in Q1 2026, underlining the popularity of digital banking tools. Alongside this growth, customers are leveraging these digital resources for rapid, secure, and personalized financial interactions.
Moreover, as part of its expanding digital strategy, Wells Fargo experienced noticeable growth in its card business due to investments in digital tools. Scharf pointed out that increased advertising in both the card and broader consumer businesses is driving this positive trend, underscoring the vital role of targeted digital marketing in attracting and retaining customers.
To further bolster its AI capabilities, Wells Fargo appointed Faraz Shafiq, an Amazon Web Services (AWS) executive, as the head of AI products and solutions effective February 9. This strategic move aims to strengthen Wells Fargo’s vision and roadmap for AI-powered products, enhancing operational efficiency and customer satisfaction.
As the bank navigates through the complexities of integrating AI and digital assets, it remains focused on balancing innovation with cybersecurity and customer trust. Mentioning the potential for technological disruptions, Wells Fargo’s executives signaled a proactive stance in recognizing and mitigating possible risks associated with these advancements.
This commitment to innovation, alongside strategic appointments like Shafiq’s, highlights Wells Fargo’s vision of transforming its banking services into a more robust and digitally connected ecosystem. As they continue to scale AI and expand their digital services, customers can expect even more personalized and efficient banking experiences.
In conclusion, Wells Fargo is not only scaling its AI to meet customer needs but is also strategically positioning itself in the competitive landscape of digital banking. With an increase in mobile users, an expanding virtual assistant footprint, and innovative digital asset solutions on the horizon, Wells Fargo is on track to redefine banking in the digital age.
-
Ataccama Helps Financial Institutions Meet EU AI Act Requirements with Pipeline-Level Data Validation
In a pivotal move for the financial sector, Ataccama has unveiled its Ataccama ONE data trust platform that aims to equip financial institutions with the necessary tools to comply with the stringent requirements of the EU AI Act. Announced on April 14, 2026, this introduction comes as organizations prepare for the enforcement of the Act starting August 2, 2026. The new legislation demands that financial entities utilizing high-risk AI systems, such as those involved in credit scoring, anti-money laundering monitoring, and fraud detection, demonstrate their adherence to rigorous data quality standards.
The EU AI Act, particularly Article 10, marks a significant transition from merely documenting governance policies to producing verifiable, data-level evidence linked to specific model decisions. Organizations must not only have robust model risk frameworks in place; they must also be able to prove that their data, at the moment of use, was relevant, representative, and free of errors. The goal is to ensure that AI-driven decisions are not only made with integrity but can also be audited and verified.
Many financial organizations have grappled with the reality that while they may have risk and data management frameworks, there are often substantial gaps in their implementation and control, leaving them ill-prepared to produce the required regulatory evidence. For instance, regulators are no longer merely assessing existing controls, but are instead interested in straightforward answers about the specific data a model has utilized and the verifiability of that data at the time of its application.
As Jonathan Paul, VP of IT Governance at Fifth Third Bank, stated, “At our scale, trusted data is essential for responsible AI and regulatory compliance. We need continuous visibility into data quality and the ability to demonstrate that our data meets defined standards.” This perspective sums up the pressing need for a solution that offers transparency, accountability, and confidence to financial institutions stepping into a new frontier of data governance and AI deployment.
Ataccama ONE addresses these challenges innovatively by providing real-time validation of data as it flows through training and inference pipelines tied to high-risk AI models. These pipelines cover various essential functions, including credit decisions and AML screenings. The platform employs business-defined rules to validate essential data attributes; for example, it checks the completeness of borrower information, ensures the validity of transaction records, and assesses the consistency of risk signals prior to the data being utilized downstream for decision-making.
To effectively manage data quality, Ataccama ONE integrates configurable quality gates that can halt or flag a pipeline run if a violation is detected. Alongside this, alerts are dispatched to the respective data owners, and a remediation ticket is automatically generated in the team’s ticketing system through API integration. This seamless approach ensures that data quality issues are addressed within a structured workflow rather than allowing them to propagate through the system unnoticed.
The platform meticulously logs each validation outcome with full context of the pipeline, providing a traceable record of whether the utilized data met the defined standards at the time it was incorporated into a decision. This meticulous documentation is crucial in an era where compliance-related implications could have significant legal repercussions. As the EU AI Act comes into play, the ability to showcase compliance through trusted data governance will be paramount.
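The gate-and-log pattern described in the last two paragraphs can be sketched generically. The following is a minimal illustration of the concept only, not Ataccama ONE's actual API; the rule names, record fields, and pipeline name are invented:

```python
# Minimal sketch of a pipeline-level quality gate: validate records against
# business-defined rules, block the run on any violation, and record the
# outcome with pipeline context. Illustrative only; not Ataccama ONE's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GateResult:
    pipeline: str
    passed: bool
    violations: list = field(default_factory=list)
    checked_at: str = ""

def run_quality_gate(pipeline_name, records, rules):
    """Apply each rule to every record; any failure blocks the pipeline run."""
    violations = [
        f"{rule_name}: record {i}"
        for i, rec in enumerate(records)
        for rule_name, rule in rules.items()
        if not rule(rec)
    ]
    return GateResult(
        pipeline=pipeline_name,
        passed=not violations,
        violations=violations,
        checked_at=datetime.now(timezone.utc).isoformat(),
    )

# Example rules for a hypothetical credit-decision pipeline.
rules = {
    "borrower_income_present": lambda r: r.get("income") is not None,
    "txn_amount_valid": lambda r: r.get("amount", 0) > 0,
}
records = [{"income": 52_000, "amount": 120.0}, {"income": None, "amount": 40.0}]
result = run_quality_gate("credit_scoring_inference", records, rules)
if not result.passed:
    # In a real deployment this branch would halt the run, alert the data
    # owner, and open a remediation ticket via an API integration.
    print("Pipeline halted:", result.violations)
```

The timestamped `GateResult` is the piece that matters for Article 10-style evidence: it ties a specific validation outcome to a specific pipeline run, which is exactly the kind of traceable record regulators can ask for.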
Most organizations operating within the financial domain tend to assume that their data is governed effectively. However, Ataccama aims to highlight a critical gap in the understanding of whether that data is indeed fit for purpose. With the looming obligations set forth by Article 10 of the EU AI Act, being equipped with the tools to meet these new standards will differentiate compliant institutions from those that are left vulnerable.
In conclusion, Ataccama’s new platform represents a significant advancement in the landscape of data validation and AI compliance for financial institutions. By enabling a robust governance framework capable of producing audit-ready evidence and maintaining data quality across AI systems, Ataccama is leading the charge towards a predictable and accountable future in the realm of financial technology.
-
Blind artist first to run a marathon guided by AI glasses
In a groundbreaking achievement, Clarke Reynolds, a blind artist and advocate, has become the first individual to successfully run a marathon with the assistance of AI glasses, demonstrating the transformative potential of technology in enhancing the capabilities of visually impaired individuals.
Competing in the Brighton Marathon on Sunday, Reynolds utilized Meta AI glasses in combination with the innovative app Be My Eyes. This app allows visually impaired users to connect with remote volunteers who can assist them in various tasks, from choosing an outfit to navigating their surroundings. Reynolds is believed to be the first person to harness such technology for a marathon, completing the demanding 26.2-mile course to raise funds for Fight for Sight, a charity focused on eye health research.
“I’ve raised awareness and sparked so many conversations which I hope will help to challenge society’s ideas about what blind people can do,” said Reynolds following his accomplishment. His effort involved a collaborative spirit, drawing support from hundreds of volunteers across the globe—from Croydon to Kansas and Belfast to Bahrain—who helped guide him visually as he ran.
The Meta AI glasses allowed these pre-selected volunteers to see exactly what Reynolds encountered in real-time, cheering him on throughout the marathon and making adjustments where necessary. Reynolds completed the race in just under six hours and twenty minutes, an impressive feat given the challenges he faced.
Reflecting on his training with Be My Eyes, Reynolds described connecting with volunteers who expected to help with simple everyday inquiries, only to discover he was preparing for a marathon. “Some have even offered to sponsor me,” he shared, underlining the human connection fostered through this technology.
To date, he has successfully raised £2,700 for Fight for Sight, with key sponsorships including prominent personalities such as TV presenter Victoria Coren Mitchell. Reynolds expressed his heartfelt gratitude for the engagement and interest shown by the public in his unique marathon journey, emphasizing the importance of sharing knowledge about eye conditions and support initiatives.
Reynolds has an inherited eye condition known as retinitis pigmentosa, which has significantly affected his vision, leaving him with only 5% vision—similar to what one might see when looking underwater. This personal challenge has fueled his desire to advocate for those with similar experiences and to demonstrate what can be achieved with technological assistance.
His previous experience includes running the London Marathon with a guide, ensuring safety while challenging himself to push beyond conventional boundaries. He was accompanied this time by trained guide runner Alastair Ratcliffe, reinforcing the importance of safety while navigating such a demanding course as a visually impaired runner.
Eleanor Southwood, the director of impact and external affairs at Fight for Sight, articulated the pride shared by the organization in Reynolds’ extraordinary achievement, praising the impact of his journey on raising awareness and funds for essential research and support projects related to eye conditions.
This event showcases the vital intersection of technology with personal ambition and advocacy. As advancements in AI and volunteer-driven applications continue to evolve, we can anticipate a future where more individuals with disabilities are empowered to achieve their goals, breaking down barriers across various fields. Reynolds’ story is not just one of personal triumph, but a beacon of hope and a catalyst for change in societal perceptions regarding the capabilities of individuals with disabilities.
-
Japan’s Tech Titans Just Teamed Up to Build a Trillion-Parameter AI—And It’s Not Here to Chat
In a significant move for the technology landscape, major Japanese corporations have banded together to create a groundbreaking venture that aims to develop a trillion-parameter AI model. This initiative is set to steer clear of the conversational AI realm, instead focusing on what many experts refer to as “Physical AI,” which encompasses robotics, autonomous vehicles, and industrial machine operations. SoftBank, NEC, Honda, and Sony Group, each contributing over 10% to this new enterprise, are joined by a consortium of major banks and steelmakers, signaling a broad industrial coalition supporting cutting-edge AI developments.
This newly formed company, tasked with creating an AI model with a trillion parameters, represents a transformative step for Japan. Unlike AI systems built for conversational tasks, the focus here is on automating physical processes. This strategy plays to Japan’s long history with robotics and its robust industrial base, now seen as valuable assets in the quest for innovation; the belief is that other global tech hubs, such as Silicon Valley and Beijing, may struggle to replicate this combination of expertise.
At the helm of this ambitious project, SoftBank and NEC are expected to spearhead the technical development of the AI model, while Honda will leverage the results to enhance its autonomous driving capabilities. Sony’s contributions will include its advanced robotics and gaming technologies, combining strengths to create a powerful AI system that can operate machinery and enhance efficiency across various sectors. Notably, Preferred Networks, a respected AI developer based in Tokyo, has also pledged its involvement in this cutting-edge project, which is expected to yield tangible results in the coming years.
The new venture is significantly backed by Japan’s national government through the New Energy and Industrial Technology Development Organization (NEDO), which has committed approximately ¥1 trillion (around $6.7 billion) over five years to support AI initiatives, starting in fiscal year 2026. This financial commitment underscores the strategic priority Japan is placing on developing domestic AI capabilities, ensuring that Japanese data remains within the country and is used innovatively without reliance on foreign cloud infrastructure.
Japan’s attempt to build a self-sufficient AI ecosystem represents an important pivot from its historical dependence on U.S. cloud services. Previously dubbed the “digital deficit,” this vulnerability has seen Japan sending vast amounts of data overseas, thus creating a dependence on external technology platforms. The initiative to develop a trillion-parameter model domestically not only seeks to alleviate this dependence but also aspires to redefine the landscape of AI development in Japan by harnessing local talent and resources.
The venture’s goal of keeping data local and building a self-reliant AI framework marks a conscious move away from funneling resources into the broader AI systems built by international firms such as OpenAI and Google. This strategy reflects a desire for independence and a more localized approach to technological development, aiming to retain economic benefits within Japan and build a robust infrastructure that supports long-term growth and innovation.
The scope of this new venture is noteworthy, especially considering the strategic investments that have drawn in steelmakers such as Nippon Steel and Kobe Steel alongside major banks including MUFG Bank and Mizuho Bank. This broad backing amplifies the potential impact of the company, positioning it not as a mere startup project but as a formidable player within Japan’s tech ecosystem, with strong implications for various industry sectors.
With the global interest in Physical AI on the rise, this initiative comes at a time when companies like Tesla are pursuing their robotics ambitions, and OpenAI is focusing on supporting AI and robotics startups worldwide. Japan’s timely investment into this area reflects its commitment to maintaining technological competitiveness on the global stage. The outcome of this undertaking may very well redefine the future of AI in Japan and set a new standard of excellence in the Physical AI domain.
-
Strengthening the hands of agentic AI
Artificial intelligence is undergoing a transformation that transcends mere technological evolution; it marks the dawn of a new industrial revolution. According to Satyakam Mohanty, co-founder and managing partner of Wyser Capital, the firm is actively investing in agentic AI startups. These are companies that build AI systems capable of not just analysis, but also execution, a capability that could revolutionize enterprise operations. With a sizable ₹200-crore fund, which also includes an ₹80-crore greenshoe option, Wyser Capital aims to back a new generation of IP-led enterprise technology startups.
Wyser Capital’s investment strategy emphasizes operational transformation through agentic AI, which enables autonomous execution of tasks. Moving beyond the current hype surrounding generative AI, whose systems focus on content creation and insights, agentic AI opens a myriad of possibilities across industries, marking a vital transition in their operational frameworks. The firm’s proactive approach to identifying and supporting startups that realize this potential reflects an important investment thesis for future technology adoption.
Since co-founding the firm in 2024 with partners Suresh Vaswani and Supria Dhanda, Mohanty has highlighted the urgency and necessity of strengthening Indian startups in the agentic AI space. The planned deployment of capital across approximately 25 startups over the next two to three years marks a strategic initiative to amplify the impact of this technology in enterprise settings. Initial investments, ranging from ₹2 crore to ₹5 crore for seed-stage companies, position Wyser Capital as a significant player in the evolving landscape of enterprise technology.
A critical element of Wyser’s assessment process revolves around determining whether an AI startup is truly enterprise-ready. Mohanty notes that many founders underestimate the complexities involved. It’s not enough to have a technically sound product; potential enterprise customers require assurance on various additional layers including security, access control, compliance certifications, and reliability. This multifaceted qualification process underscores the hurdles startups must consider to gain traction in enterprise environments.
Revenue generation timelines for AI startups vary significantly based on the type of solutions they are developing. Software-based solutions might see early proof-of-concept deployments within four to six months and begin generating revenue within six to eight months of launch. Conversely, those building physical AI systems, such as robotics or hardware integrated with AI, can expect timelines of two to three years before they see substantial revenue. Such insights can be instrumental for aspiring entrepreneurs navigating the complex AI landscape.
Despite the burgeoning potential of India’s AI startup ecosystem, structural gaps persist that could hinder progress. Mohanty emphasizes the need for patient capital, noting that deep-technology ventures require longer maturation periods. This aspect is often overlooked by investors seeking quick returns, which could stifle innovation in the agentic AI domain. Addressing these gaps will be crucial for establishing a robust and competitive AI industry capable of matching global standards.
In conclusion, as Wyser Capital ventures into the expansive landscape of agentic AI, it lays the foundation for an exciting phase of technological evolution in enterprise solutions. The focus on startups that prioritize execution alongside analysis highlights a shift toward more integrated, business-oriented approaches to technology deployment. For business leaders, product builders, and investors, these developments present a critical crossroad filled with opportunity and promise for driving efficiencies and transformative success within their operations.
-
GE Aerospace scales AI from pilots to production; India anchors global capability
In an era where artificial intelligence (AI) is revolutionizing industries, GE Aerospace stands at the forefront, transitioning AI from experimental projects to core operational strategies that significantly enhance efficiency and productivity. As Dinakar Deshmukh, Executive Director of Data Science and AI, points out, the company has seen substantial improvements, including a remarkable 50% reduction in false positives and a more than 60% decrease in lead times, thanks to machine learning-driven engine monitoring systems.
This cutting-edge technology employs complex algorithms to detect anomalies in commercial engines, anomalies that often elude human observation. Deshmukh emphasizes the considerable impact of these advancements on engine monitoring, remarking that they directly influence the reliability and safety of their products—a critical factor in the aviation industry.
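GE has not published the details of its monitoring models, so the following is only a minimal sketch of the general idea behind such systems: flag sensor readings that deviate sharply from a recent baseline. The `detect_anomalies` function, its parameters, and the simulated exhaust-gas-temperature data are all illustrative assumptions, not GE’s actual method.

```python
import statistics

def detect_anomalies(readings, window=10, threshold=3.0):
    """Illustrative rolling z-score detector (not GE's model).

    Flags any reading that deviates from the mean of the preceding
    `window` readings by more than `threshold` standard deviations.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        # Guard against a flat baseline (zero variance).
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Simulated exhaust-gas-temperature trace with one injected spike.
egt = [650.0 + 0.5 * (i % 7) for i in range(40)]
egt[30] = 700.0  # anomalous spike at index 30
print(detect_anomalies(egt))  # → [30]
```

Production systems use far richer models over many correlated sensor channels, but tuning the equivalent of the `threshold` parameter is exactly where the reported reduction in false positives would come from: a threshold set too low floods analysts with spurious alerts, while one set too high misses real faults.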
Generative AI, while still evolving, is slowly but surely making its mark within the company’s operations. Deshmukh acknowledges that although they haven’t completely mastered generative AI, the applications currently in production are already yielding tangible business outcomes. In areas such as software development, they are observing productivity gains ranging from 20% to 25%. This level of improvement underscores the potential of AI in optimizing not just operations, but also enhancing overall performance.
Notably, GE Aerospace has strategically centralized its AI capabilities in India, where more than half of its AI team operates from Bengaluru. This decision aligns with a broader trend within major global companies to tap into India’s vast talent pool, particularly in technology and data science. With approximately 2,500 employees in India, the emphasis on AI development signifies GE’s commitment to fostering innovation in a growing market.
While the company is rapidly adopting AI, Deshmukh emphasizes a disciplined approach to its deployment. Rather than broadly applying AI across all departments, GE Aerospace focuses on operations critical to its business performance. Identifying complex areas where efficiency gains are most achievable allows the company to target its AI initiatives more effectively. This calculated strategy ensures that investments in AI lead to profound impacts on the production processes while minimizing resource waste.
In response to the mounting interest and demand for AI, GE Aerospace has ramped up its investments significantly, increasing its AI expenditures by 2.5 to 3 times over the past two and a half years. Such a commitment illustrates the company’s recognition of AI as a vital component of its business strategy, especially in an industry where innovation and efficiency are paramount.
However, scaling these AI solutions from proof of concept to full production poses its own challenges. Deshmukh candidly notes that this transition remains one of the most difficult aspects of integrating AI into the company’s operational fabric. To navigate this complexity, GE Aerospace integrates lean operating principles with AI, enhancing the scalability of their solutions. This synergy between lean methodology and AI ensures that the systems put in place not only scale effectively but also respond dynamically to the varying needs of their operations.
GE Aerospace’s approach to problem-solving sets it apart from typical methodologies; Deshmukh asserts, “Our approach is to let the problem define the model, not the other way around.” This philosophy reflects a stark shift in how businesses might approach AI—a move away from one-size-fits-all models towards tailored solutions that respond directly to specific challenges.
In conclusion, GE Aerospace exemplifies how companies can effectively harness the potential of AI, transforming challenges into opportunities for significant operational enhancements. As AI technology continues to evolve, the strategic applications and thought leadership demonstrated by GE Aerospace will likely serve as a benchmark for other organizations looking to successfully navigate the complex landscape of AI implementation.
